Feb 13 20:05:12.899491 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:05:12.899520 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:05:12.899530 kernel: KASLR enabled
Feb 13 20:05:12.899536 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:05:12.899542 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:05:12.899548 kernel: random: crng init done
Feb 13 20:05:12.899555 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:05:12.899561 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:05:12.899567 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:05:12.899574 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899580 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899587 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899593 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899599 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899606 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899614 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899621 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899627 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:05:12.899633 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:05:12.899640 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:05:12.899646 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:05:12.899653 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 20:05:12.899659 kernel: Zone ranges:
Feb 13 20:05:12.899665 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:05:12.899671 kernel: DMA32 empty
Feb 13 20:05:12.899679 kernel: Normal empty
Feb 13 20:05:12.899685 kernel: Movable zone start for each node
Feb 13 20:05:12.899691 kernel: Early memory node ranges
Feb 13 20:05:12.899697 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:05:12.899704 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:05:12.899710 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:05:12.899716 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:05:12.899722 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:05:12.899729 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:05:12.899735 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:05:12.899741 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:05:12.899748 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:05:12.899755 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:05:12.899761 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:05:12.899768 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:05:12.899777 kernel: psci: Trusted OS migration not required
Feb 13 20:05:12.899784 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:05:12.899791 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:05:12.899798 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:05:12.899805 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:05:12.899812 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:05:12.899819 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:05:12.899826 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:05:12.899832 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:05:12.899839 kernel: CPU features: detected: Spectre-v4
Feb 13 20:05:12.899846 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:05:12.899853 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:05:12.899860 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:05:12.899868 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:05:12.899874 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:05:12.899881 kernel: alternatives: applying boot alternatives
Feb 13 20:05:12.899889 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:05:12.899896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:05:12.899903 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:05:12.899910 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:05:12.899916 kernel: Fallback order for Node 0: 0
Feb 13 20:05:12.899923 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:05:12.899929 kernel: Policy zone: DMA
Feb 13 20:05:12.899936 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:05:12.899944 kernel: software IO TLB: area num 4.
Feb 13 20:05:12.899951 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:05:12.899958 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 20:05:12.899965 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:05:12.899972 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:05:12.899979 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:05:12.899986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:05:12.899993 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:05:12.899999 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:05:12.900006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
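The kernel command line above carries Flatcar's dm-verity parameters for /usr (verity.usr, verity.usrhash). A minimal sketch of splitting such a line into key/value pairs, e.g. to check verity.usrhash when debugging a failed /usr mount; the CMDLINE constant is copied from the log, everything else is illustrative (at runtime one would read /proc/cmdline instead):

    # Parse a kernel command line into a dict. Flag-style tokens without
    # '=' map to True; quoting rules are deliberately simplified.
    CMDLINE = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=ttyS0,115200 flatcar.first_boot=detected acpi=force "
        "verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7"
    )

    def parse_cmdline(line: str) -> dict:
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            # partition() splits at the first '=', so nested values such as
            # verity.usr=PARTUUID=... keep their full right-hand side.
            params[key] = value if sep else True
        return params

    params = parse_cmdline(CMDLINE)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # c15c751c...c201a7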
Feb 13 20:05:12.900013 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:05:12.900020 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:05:12.900028 kernel: GICv3: 256 SPIs implemented
Feb 13 20:05:12.900034 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:05:12.900041 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:05:12.900048 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:05:12.900054 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:05:12.900061 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:05:12.900068 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:05:12.900074 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:05:12.900081 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:05:12.900088 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:05:12.900095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:05:12.900103 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:05:12.900110 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:05:12.900117 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:05:12.900124 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:05:12.900131 kernel: arm-pv: using stolen time PV
Feb 13 20:05:12.900138 kernel: Console: colour dummy device 80x25
Feb 13 20:05:12.900144 kernel: ACPI: Core revision 20230628
Feb 13 20:05:12.900166 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:05:12.900173 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:05:12.900180 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:05:12.900189 kernel: landlock: Up and running.
Feb 13 20:05:12.900196 kernel: SELinux: Initializing.
Feb 13 20:05:12.900203 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:05:12.900210 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:05:12.900217 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:05:12.900225 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:05:12.900232 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:05:12.900239 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:05:12.900247 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:05:12.900255 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:05:12.900262 kernel: Remapping and enabling EFI services.
Feb 13 20:05:12.900269 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:05:12.900276 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:05:12.900283 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:05:12.900290 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:05:12.900297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:05:12.900304 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:05:12.900311 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:05:12.900317 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:05:12.900326 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:05:12.900333 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:05:12.900344 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:05:12.900353 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:05:12.900360 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:05:12.900368 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:05:12.900375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:05:12.900390 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:05:12.900397 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:05:12.900407 kernel: SMP: Total of 4 processors activated.
Feb 13 20:05:12.900415 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:05:12.900422 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:05:12.900429 kernel: CPU features: detected: Common not Private translations
Feb 13 20:05:12.900437 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:05:12.900444 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:05:12.900451 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:05:12.900458 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:05:12.900467 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:05:12.900474 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:05:12.900482 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:05:12.900489 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:05:12.900496 kernel: alternatives: applying system-wide alternatives
Feb 13 20:05:12.900507 kernel: devtmpfs: initialized
Feb 13 20:05:12.900514 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:05:12.900521 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:05:12.900529 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:05:12.900537 kernel: SMBIOS 3.0.0 present.
Feb 13 20:05:12.900545 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:05:12.900552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:05:12.900560 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:05:12.900567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:05:12.900575 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:05:12.900582 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:05:12.900589 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 20:05:12.900597 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:05:12.900605 kernel: cpuidle: using governor menu
Feb 13 20:05:12.900612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:05:12.900620 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:05:12.900627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:05:12.900634 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:05:12.900642 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:05:12.900649 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:05:12.900656 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:05:12.900663 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:05:12.900672 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:05:12.900679 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:05:12.900686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:05:12.900694 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:05:12.900701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:05:12.900708 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:05:12.900715 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:05:12.900722 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:05:12.900730 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:05:12.900738 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:05:12.900745 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:05:12.900752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:05:12.900760 kernel: ACPI: Interpreter enabled
Feb 13 20:05:12.900767 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:05:12.900774 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:05:12.900781 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:05:12.900788 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:05:12.900796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:05:12.900929 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:05:12.901007 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:05:12.901072 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:05:12.901135 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:05:12.901197 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:05:12.901206 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:05:12.901214 kernel: PCI host bridge to bus 0000:00
Feb 13 20:05:12.901282 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:05:12.901341 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:05:12.901413 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:05:12.901471 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:05:12.901557 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:05:12.901631 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:05:12.901700 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:05:12.901765 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:05:12.901828 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:05:12.901891 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:05:12.901954 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:05:12.902017 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:05:12.902074 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:05:12.902133 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:05:12.902190 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:05:12.902199 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:05:12.902207 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:05:12.902214 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:05:12.902221 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:05:12.902229 kernel: iommu: Default domain type: Translated
Feb 13 20:05:12.902236 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:05:12.902244 kernel: efivars: Registered efivars operations
Feb 13 20:05:12.902252 kernel: vgaarb: loaded
Feb 13 20:05:12.902260 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:05:12.902267 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:05:12.902274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:05:12.902282 kernel: pnp: PnP ACPI init
Feb 13 20:05:12.902355 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:05:12.902366 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:05:12.902373 kernel: NET: Registered PF_INET protocol family
Feb 13 20:05:12.902457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:05:12.902465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:05:12.902473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:05:12.902480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:05:12.902488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:05:12.902495 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:05:12.902515 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:05:12.902527 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:05:12.902535 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:05:12.902545 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:05:12.902552 kernel: kvm [1]: HYP mode not available
Feb 13 20:05:12.902560 kernel: Initialise system trusted keyrings
Feb 13 20:05:12.902567 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:05:12.902574 kernel: Key type asymmetric registered
Feb 13 20:05:12.902581 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:05:12.902588 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:05:12.902596 kernel: io scheduler mq-deadline registered
Feb 13 20:05:12.902603 kernel: io scheduler kyber registered
Feb 13 20:05:12.902611 kernel: io scheduler bfq registered
Feb 13 20:05:12.902619 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:05:12.902626 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:05:12.902634 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:05:12.902713 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:05:12.902724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:05:12.902731 kernel: thunder_xcv, ver 1.0
Feb 13 20:05:12.902738 kernel: thunder_bgx, ver 1.0
Feb 13 20:05:12.902745 kernel: nicpf, ver 1.0
Feb 13 20:05:12.902754 kernel: nicvf, ver 1.0
Feb 13 20:05:12.902826 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:05:12.902887 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:05:12 UTC (1739477112)
Feb 13 20:05:12.902897 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:05:12.902904 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:05:12.902912 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:05:12.902919 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:05:12.902926 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:05:12.902935 kernel: Segment Routing with IPv6
Feb 13 20:05:12.902943 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:05:12.902950 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:05:12.902957 kernel: Key type dns_resolver registered
Feb 13 20:05:12.902964 kernel: registered taskstats version 1
Feb 13 20:05:12.902972 kernel: Loading compiled-in X.509 certificates
Feb 13 20:05:12.902979 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:05:12.902986 kernel: Key type .fscrypt registered
Feb 13 20:05:12.902993 kernel: Key type fscrypt-provisioning registered
Feb 13 20:05:12.903002 kernel: ima: No TPM chip found, activating TPM-bypass!
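The rtc-efi line pairs the wall-clock time with its Unix timestamp. A quick sanity check of that correspondence, using only the Python standard library:

    # Verify that Unix timestamp 1739477112 really is 2025-02-13T20:05:12 UTC,
    # as logged by rtc-efi above.
    from datetime import datetime, timezone

    ts = 1739477112
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2025-02-13T20:05:12+00:00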
Feb 13 20:05:12.903009 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:05:12.903016 kernel: ima: No architecture policies found
Feb 13 20:05:12.903024 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:05:12.903031 kernel: clk: Disabling unused clocks
Feb 13 20:05:12.903038 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:05:12.903045 kernel: Run /init as init process
Feb 13 20:05:12.903052 kernel: with arguments:
Feb 13 20:05:12.903059 kernel: /init
Feb 13 20:05:12.903068 kernel: with environment:
Feb 13 20:05:12.903075 kernel: HOME=/
Feb 13 20:05:12.903082 kernel: TERM=linux
Feb 13 20:05:12.903089 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:05:12.903098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:05:12.903107 systemd[1]: Detected virtualization kvm.
Feb 13 20:05:12.903115 systemd[1]: Detected architecture arm64.
Feb 13 20:05:12.903124 systemd[1]: Running in initrd.
Feb 13 20:05:12.903131 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:05:12.903139 systemd[1]: Hostname set to .
Feb 13 20:05:12.903147 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:05:12.903155 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:05:12.903162 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:05:12.903170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:05:12.903178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:05:12.903187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:05:12.903195 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:05:12.903204 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:05:12.903213 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:05:12.903221 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:05:12.903228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:05:12.903236 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:05:12.903245 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:05:12.903253 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:05:12.903261 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:05:12.903269 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:05:12.903276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:05:12.903284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:05:12.903292 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:05:12.903300 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:05:12.903308 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:05:12.903317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:05:12.903325 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:05:12.903332 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:05:12.903340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:05:12.903348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:05:12.903356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:05:12.903363 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:05:12.903371 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:05:12.903391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:05:12.903399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:05:12.903407 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:05:12.903415 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:05:12.903423 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:05:12.903431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:05:12.903441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:05:12.903466 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 20:05:12.903485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:05:12.903495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:05:12.903509 systemd-journald[239]: Journal started
Feb 13 20:05:12.903527 systemd-journald[239]: Runtime Journal (/run/log/journal/45082afb12bf49df9a9e4804bd00a17d) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:05:12.894843 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 20:05:12.907015 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:05:12.910324 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:05:12.914383 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:05:12.914769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:05:12.917453 kernel: Bridge firewalling registered
Feb 13 20:05:12.915220 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 20:05:12.916601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:05:12.922244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:05:12.924326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:05:12.927262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:05:12.929144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:05:12.931773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:05:12.940498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:05:12.942696 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:05:12.954807 dracut-cmdline[276]: dracut-dracut-053
Feb 13 20:05:12.957287 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:05:12.971973 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 20:05:12.971990 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:05:12.972021 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:05:12.976745 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 20:05:12.977630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:05:12.980974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:05:13.022394 kernel: SCSI subsystem initialized
Feb 13 20:05:13.025410 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:05:13.033404 kernel: iscsi: registered transport (tcp)
Feb 13 20:05:13.047434 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:05:13.047449 kernel: QLogic iSCSI HBA Driver
Feb 13 20:05:13.088483 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:05:13.098584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:05:13.117175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:05:13.117216 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:05:13.117234 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:05:13.163415 kernel: raid6: neonx8 gen() 15698 MB/s
Feb 13 20:05:13.180403 kernel: raid6: neonx4 gen() 15562 MB/s
Feb 13 20:05:13.197413 kernel: raid6: neonx2 gen() 13198 MB/s
Feb 13 20:05:13.214411 kernel: raid6: neonx1 gen() 10437 MB/s
Feb 13 20:05:13.231412 kernel: raid6: int64x8 gen() 6937 MB/s
Feb 13 20:05:13.248414 kernel: raid6: int64x4 gen() 7321 MB/s
Feb 13 20:05:13.265405 kernel: raid6: int64x2 gen() 6114 MB/s
Feb 13 20:05:13.282483 kernel: raid6: int64x1 gen() 5034 MB/s
Feb 13 20:05:13.282523 kernel: raid6: using algorithm neonx8 gen() 15698 MB/s
Feb 13 20:05:13.300468 kernel: raid6: .... xor() 11893 MB/s, rmw enabled
Feb 13 20:05:13.300496 kernel: raid6: using neon recovery algorithm
Feb 13 20:05:13.305798 kernel: xor: measuring software checksum speed
Feb 13 20:05:13.305813 kernel: 8regs : 19778 MB/sec
Feb 13 20:05:13.306466 kernel: 32regs : 19650 MB/sec
Feb 13 20:05:13.307710 kernel: arm64_neon : 25354 MB/sec
Feb 13 20:05:13.307734 kernel: xor: using function: arm64_neon (25354 MB/sec)
Feb 13 20:05:13.357555 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:05:13.368446 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
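The raid6 and xor lines are the kernel benchmarking each implementation and keeping the fastest (neonx8 for gen(), arm64_neon for xor). A small sketch reproducing that selection from the log text itself; the LOG string is copied from the entries above, the regex is an assumption about the line format:

    # Pick the fastest raid6 gen() implementation from benchmark lines like
    # the ones logged above; mirrors the kernel's own choice (neonx8).
    import re

    LOG = """\
    raid6: neonx8 gen() 15698 MB/s
    raid6: neonx4 gen() 15562 MB/s
    raid6: neonx2 gen() 13198 MB/s
    raid6: neonx1 gen() 10437 MB/s
    raid6: int64x8 gen() 6937 MB/s
    raid6: int64x4 gen() 7321 MB/s
    raid6: int64x2 gen() 6114 MB/s
    raid6: int64x1 gen() 5034 MB/s
    """

    results = re.findall(r"raid6: (\S+) gen\(\) (\d+) MB/s", LOG)
    name, speed = max(results, key=lambda r: int(r[1]))
    print(f"using algorithm {name} gen() {speed} MB/s")
    # -> using algorithm neonx8 gen() 15698 MB/s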
Feb 13 20:05:13.375577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:05:13.386607 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 20:05:13.389674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:05:13.396522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:05:13.407646 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 20:05:13.432733 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:05:13.443546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:05:13.481901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:05:13.487530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:05:13.500048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:05:13.503400 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:05:13.504515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:05:13.508597 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:05:13.514639 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:05:13.523522 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:05:13.531063 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:05:13.531166 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:05:13.531177 kernel: GPT:9289727 != 19775487
Feb 13 20:05:13.531186 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:05:13.531195 kernel: GPT:9289727 != 19775487
Feb 13 20:05:13.531204 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:05:13.531216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:05:13.525654 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:05:13.531153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:05:13.531245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:05:13.534988 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:05:13.536063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:05:13.546343 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (509)
Feb 13 20:05:13.536243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:05:13.539986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:05:13.550814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (510)
Feb 13 20:05:13.556638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:05:13.566779 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:05:13.568175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:05:13.579914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
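The GPT complaints are the usual symptom of a disk image that was grown after being built: the backup GPT header still sits at LBA 9289727, where the smaller original image ended, while the virtual disk actually ends at LBA 19775487. disk-uuid.service rewrites the headers shortly after (see "Primary Header is updated" below). The arithmetic, as a sketch; the interpretation (grown image) is an inference from the log, not stated in it:

    # Why "GPT:9289727 != 19775487": the backup header location no longer
    # matches the last LBA of the (enlarged) disk.
    SECTOR = 512
    image_last_lba = 9289727     # where the backup GPT header currently sits
    disk_blocks    = 19775488    # 512-byte logical blocks reported by virtio_blk
    disk_last_lba  = disk_blocks - 1

    print(f"image built for ~{(image_last_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
    print(f"disk actually is {disk_blocks * SECTOR / 2**30:.2f} GiB")           # 9.43 GiB
    assert image_last_lba != disk_last_lba  # what the kernel is complaining about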
Feb 13 20:05:13.583689 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:05:13.584806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:05:13.590191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:05:13.603515 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:05:13.605129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:05:13.610122 disk-uuid[550]: Primary Header is updated.
Feb 13 20:05:13.610122 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:05:13.610122 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:05:13.613403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:05:13.625242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:05:14.624278 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:05:14.625336 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:05:14.641925 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:05:14.642019 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:05:14.675589 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:05:14.678293 sh[573]: Success
Feb 13 20:05:14.693401 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:05:14.720968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:05:14.732575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:05:14.734154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:05:14.744454 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:05:14.744525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:05:14.744548 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:05:14.744941 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:05:14.746389 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:05:14.749411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:05:14.750615 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:05:14.751267 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:05:14.753936 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:05:14.762915 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:05:14.762948 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:05:14.762959 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:05:14.765402 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:05:14.773175 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:05:14.774652 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:05:14.779419 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:05:14.786582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:05:14.842469 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:05:14.851548 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:05:14.878419 systemd-networkd[765]: lo: Link UP
Feb 13 20:05:14.878427 systemd-networkd[765]: lo: Gained carrier
Feb 13 20:05:14.879064 systemd-networkd[765]: Enumeration completed
Feb 13 20:05:14.879324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:05:14.879631 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:05:14.879634 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:05:14.885175 ignition[668]: Ignition 2.19.0
Feb 13 20:05:14.881278 systemd[1]: Reached target network.target - Network.
Feb 13 20:05:14.885181 ignition[668]: Stage: fetch-offline
Feb 13 20:05:14.883479 systemd-networkd[765]: eth0: Link UP
Feb 13 20:05:14.885211 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:05:14.883483 systemd-networkd[765]: eth0: Gained carrier
Feb 13 20:05:14.885218 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:05:14.883490 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:05:14.885365 ignition[668]: parsed url from cmdline: ""
Feb 13 20:05:14.885368 ignition[668]: no config URL provided
Feb 13 20:05:14.885373 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:05:14.885391 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:05:14.885412 ignition[668]: op(1): [started] loading QEMU firmware config module
Feb 13 20:05:14.885416 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:05:14.898238 ignition[668]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:05:14.902424 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:05:14.919678 ignition[668]: parsing config with SHA512: e7696ddf958a3b69fd1a9b37786dce9d83966d5eff23cbf6af2ea783a0ea064c99ebd96465584d885076ad35356d3bc9138af138edfd854a09dace335ebb4f2b
Feb 13 20:05:14.924447 unknown[668]: fetched base config from "system"
Feb 13 20:05:14.924475 unknown[668]: fetched user config from "qemu"
Feb 13 20:05:14.925884 ignition[668]: fetch-offline: fetch-offline passed
Feb 13 20:05:14.925950 ignition[668]: Ignition finished successfully
Feb 13 20:05:14.927343 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:05:14.928646 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:05:14.935592 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:05:14.945551 ignition[771]: Ignition 2.19.0
Feb 13 20:05:14.945560 ignition[771]: Stage: kargs
Feb 13 20:05:14.945703 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:05:14.945712 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:05:14.946524 ignition[771]: kargs: kargs passed
Feb 13 20:05:14.948636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
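Ignition logs the SHA512 of the rendered config before applying it ("parsing config with SHA512: ..." above). A sketch of computing the same kind of digest over a local config blob; the file name is illustrative, not taken from the log:

    # Compute the SHA512 digest of an Ignition config, matching the digest
    # format Ignition prints in its fetch-offline stage.
    import hashlib

    with open("config.ign", "rb") as f:   # hypothetical local config file
        digest = hashlib.sha512(f.read()).hexdigest()
    print(f"parsing config with SHA512: {digest}")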
Feb 13 20:05:14.946567 ignition[771]: Ignition finished successfully
Feb 13 20:05:14.956541 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:05:14.965340 ignition[780]: Ignition 2.19.0
Feb 13 20:05:14.965348 ignition[780]: Stage: disks
Feb 13 20:05:14.965529 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:05:14.967780 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:05:14.965538 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:05:14.969280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:05:14.966310 ignition[780]: disks: disks passed
Feb 13 20:05:14.970843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:05:14.966347 ignition[780]: Ignition finished successfully
Feb 13 20:05:14.972699 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:05:14.974335 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:05:14.975723 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:05:14.986572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:05:14.995561 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:05:15.001423 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:05:15.003344 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:05:15.049300 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:05:15.050716 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:05:15.050469 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:05:15.066474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:05:15.068117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:05:15.069319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:05:15.069397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:05:15.069450 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:05:15.077484 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Feb 13 20:05:15.073433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:05:15.081687 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:05:15.081739 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:05:15.081765 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:05:15.077196 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:05:15.084400 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:05:15.085725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:05:15.119957 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:05:15.123952 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:05:15.127368 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:05:15.131223 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:05:15.198280 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:05:15.207484 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:05:15.209739 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:05:15.215389 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:05:15.226521 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:05:15.231959 ignition[914]: INFO : Ignition 2.19.0
Feb 13 20:05:15.231959 ignition[914]: INFO : Stage: mount
Feb 13 20:05:15.233400 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:05:15.233400 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:05:15.233400 ignition[914]: INFO : mount: mount passed
Feb 13 20:05:15.233400 ignition[914]: INFO : Ignition finished successfully
Feb 13 20:05:15.234724 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:05:15.243512 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:05:15.742948 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:05:15.751602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:05:15.756405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Feb 13 20:05:15.758930 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:05:15.758956 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:05:15.758966 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:05:15.761395 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:05:15.762638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:05:15.777295 ignition[944]: INFO : Ignition 2.19.0
Feb 13 20:05:15.777295 ignition[944]: INFO : Stage: files
Feb 13 20:05:15.778851 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:05:15.778851 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:05:15.778851 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:05:15.782139 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:05:15.782139 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:05:15.782139 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:05:15.782139 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:05:15.782139 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:05:15.782139 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:05:15.782139 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 20:05:15.781081 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 20:05:15.831005 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:05:16.162079 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:05:16.162079 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:05:16.165523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 20:05:16.244506 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 20:05:16.474745 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:05:16.717448 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:05:16.717448 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:05:16.720832 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:05:16.742342 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:05:16.746113 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:05:16.747604 ignition[944]: INFO : files: files passed
Feb 13 20:05:16.747604 ignition[944]: INFO : Ignition finished successfully
Feb 13 20:05:16.748942 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:05:16.761502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:05:16.764522 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:05:16.765733 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:05:16.766443 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:05:16.771273 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:05:16.773358 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:05:16.773358 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:05:16.776108 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:05:16.775094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:05:16.777523 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:05:16.783496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:05:16.801964 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:05:16.802850 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:05:16.804199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:05:16.805941 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:05:16.807624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:05:16.808287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:05:16.823114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:05:16.831531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:05:16.839889 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:05:16.841031 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:05:16.842922 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:05:16.844558 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:05:16.844666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:05:16.847032 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:05:16.848915 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:05:16.850453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:05:16.852087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:05:16.854023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:05:16.855856 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:05:16.857517 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:05:16.859302 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:05:16.861143 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:05:16.862754 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:05:16.864159 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:05:16.864268 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:05:16.866409 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:05:16.868254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:05:16.870056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:05:16.873453 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:05:16.874605 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:05:16.874708 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:05:16.877250 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:05:16.877364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:05:16.879261 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:05:16.880772 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:05:16.884448 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:05:16.885712 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:05:16.887635 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:05:16.889091 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:05:16.889173 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:05:16.890607 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:05:16.890686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:05:16.892105 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:05:16.892213 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:05:16.893855 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:05:16.893949 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:05:16.908628 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:05:16.909466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:05:16.909602 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:05:16.912069 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:05:16.912933 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:05:16.913050 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:05:16.914974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:05:16.915072 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:05:16.919076 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:05:16.923186 ignition[999]: INFO : Ignition 2.19.0 Feb 13 20:05:16.923186 ignition[999]: INFO : Stage: umount Feb 13 20:05:16.923186 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:05:16.923186 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:05:16.923186 ignition[999]: INFO : umount: umount passed Feb 13 20:05:16.923186 ignition[999]: INFO : Ignition finished successfully Feb 13 20:05:16.920197 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:05:16.923690 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:05:16.923780 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:05:16.925224 systemd[1]: Stopped target network.target - Network. 
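The long run of "Stopped target ..." entries above records the exact order in which the initrd tears itself down. That order can be recovered mechanically from journal text; the regex below matches systemd's own message format as it appears in these lines:

    # List initrd teardown targets in the order they were stopped.
    # Feed it `journalctl -b -o short-precise` output or the text above.
    import re
    import sys

    STOPPED = re.compile(r"systemd\[1\]: Stopped target (\S+)")

    def stopped_targets(lines):
        for line in lines:
            m = STOPPED.search(line)
            if m:
                yield m.group(1)  # e.g. "nss-lookup.target"

    if __name__ == "__main__":
        for target in stopped_targets(sys.stdin):
            print(target)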
Feb 13 20:05:16.926425 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:05:16.926477 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:05:16.928358 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:05:16.928416 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:05:16.930543 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:05:16.930591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:05:16.932157 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:05:16.932199 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:05:16.933985 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:05:16.935894 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:05:16.938334 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:05:16.942688 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:05:16.942804 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:05:16.943448 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 20:05:16.945630 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:05:16.945741 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:05:16.948053 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:05:16.948123 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:05:16.962475 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:05:16.963292 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:05:16.963354 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:05:16.965239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:05:16.965284 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:05:16.967010 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:05:16.967054 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:05:16.968967 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:05:16.969010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:05:16.970870 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:05:16.979559 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:05:16.979653 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:05:16.988725 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:05:16.989067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:05:16.991264 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:05:16.991346 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:05:16.993409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:05:16.993470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:05:16.994567 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:05:16.994597 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:05:16.996052 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:05:16.996097 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:05:16.998588 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:05:16.998630 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:05:17.001101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:05:17.001143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:05:17.003051 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:05:17.003093 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:05:17.015565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:05:17.016561 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:05:17.016613 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:05:17.018573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:05:17.018616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:05:17.022992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:05:17.023085 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:05:17.024587 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:05:17.026829 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:05:17.035515 systemd[1]: Switching root. Feb 13 20:05:17.063343 systemd-journald[239]: Journal stopped Feb 13 20:05:17.732726 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 20:05:17.732780 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:05:17.732793 kernel: SELinux: policy capability open_perms=1 Feb 13 20:05:17.732803 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:05:17.732814 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:05:17.732824 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:05:17.732834 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:05:17.732847 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:05:17.732857 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:05:17.732867 systemd[1]: Successfully loaded SELinux policy in 30.520ms. Feb 13 20:05:17.732887 kernel: audit: type=1403 audit(1739477117.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:05:17.732898 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.368ms. Feb 13 20:05:17.732910 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:05:17.732922 systemd[1]: Detected virtualization kvm. Feb 13 20:05:17.732935 systemd[1]: Detected architecture arm64. Feb 13 20:05:17.732945 systemd[1]: Detected first boot. Feb 13 20:05:17.732958 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:05:17.732970 zram_generator::config[1045]: No configuration found. Feb 13 20:05:17.732982 systemd[1]: Populated /etc with preset unit settings. 
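The hand-off above leaves a measurable journal gap: "Journal stopped" is stamped 20:05:17.063343 and the old journald's SIGTERM notice 20:05:17.732726. Differencing the two timestamps gives the switch-root window as recorded, roughly 670 ms; this is approximate, since journald replay ordering across a switch-root is not guaranteed.

    from datetime import datetime

    FMT = "%H:%M:%S.%f"

    def delta_ms(start: str, end: str) -> float:
        # Difference two same-day journal timestamps in milliseconds.
        return (datetime.strptime(end, FMT)
                - datetime.strptime(start, FMT)).total_seconds() * 1000.0

    print(f"{delta_ms('20:05:17.063343', '20:05:17.732726'):.1f} ms")  # ~669.4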
Feb 13 20:05:17.732993 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:05:17.733004 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:05:17.733015 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:05:17.733026 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:05:17.733037 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:05:17.733050 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:05:17.733062 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:05:17.733073 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:05:17.733085 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:05:17.733096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:05:17.733111 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:05:17.733126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:05:17.733137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:05:17.733148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:05:17.733161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:05:17.733172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:05:17.733184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:05:17.733195 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:05:17.733206 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:05:17.733217 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:05:17.733228 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:05:17.733239 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:05:17.733251 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:05:17.733262 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:05:17.733273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:05:17.733284 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:05:17.733295 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:05:17.733306 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:05:17.733317 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:05:17.733329 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:05:17.733341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:05:17.733353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:05:17.733364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:05:17.733375 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 20:05:17.733489 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:05:17.733503 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:05:17.733514 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:05:17.733525 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:05:17.733536 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:05:17.733550 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:05:17.733561 systemd[1]: Reached target machines.target - Containers. Feb 13 20:05:17.733573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:05:17.733584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:05:17.733595 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:05:17.733606 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:05:17.733617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:05:17.733628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:05:17.733639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:05:17.733651 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:05:17.733662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:05:17.733674 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:05:17.733685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:05:17.733696 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:05:17.733706 kernel: fuse: init (API version 7.39) Feb 13 20:05:17.733717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:05:17.733728 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:05:17.733740 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:05:17.733751 kernel: loop: module loaded Feb 13 20:05:17.733761 kernel: ACPI: bus type drm_connector registered Feb 13 20:05:17.733771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:05:17.733784 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:05:17.733795 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:05:17.733824 systemd-journald[1112]: Collecting audit messages is disabled. Feb 13 20:05:17.733848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:05:17.733861 systemd-journald[1112]: Journal started Feb 13 20:05:17.733882 systemd-journald[1112]: Runtime Journal (/run/log/journal/45082afb12bf49df9a9e4804bd00a17d) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:05:17.733918 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:05:17.545699 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:05:17.564208 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
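"Initializing machine ID from VM UUID" (logged just before the journal restart) explains the runtime journal directory name above: /run/log/journal/45082afb12bf49df9a9e4804bd00a17d is keyed by the machine ID, which on first boot systemd can seed from the hypervisor-provided DMI product UUID. A rough sketch of the normalization (lowercase hex, dashes stripped); the dashed UUID below is a hypothetical rendering, and systemd's actual sd_id128 handling (e.g. SMBIOS byte-order fixups) may differ:

    import uuid

    def machine_id_from_vm_uuid(product_uuid: str) -> str:
        # 32 lowercase hex characters, no dashes, like /etc/machine-id
        return uuid.UUID(product_uuid).hex

    print(machine_id_from_vm_uuid("45082AFB-12BF-49DF-9A9E-4804BD00A17D"))
    # -> 45082afb12bf49df9a9e4804bd00a17d (the journal directory name above)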
Feb 13 20:05:17.564563 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:05:17.735568 systemd[1]: Stopped verity-setup.service. Feb 13 20:05:17.739398 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:05:17.739894 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:05:17.740974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:05:17.742126 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:05:17.743157 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:05:17.744300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:05:17.745472 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:05:17.748414 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:05:17.749694 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:05:17.751106 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:05:17.751247 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:05:17.752588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:05:17.752711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:05:17.753982 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:05:17.754112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:05:17.756517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:05:17.756663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:05:17.757989 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:05:17.758132 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:05:17.760716 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:05:17.760845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:05:17.762058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:05:17.763464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:05:17.764898 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:05:17.775894 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:05:17.785478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:05:17.787339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:05:17.788462 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:05:17.788507 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:05:17.790293 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:05:17.792367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:05:17.794249 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:05:17.795259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:05:17.796708 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:05:17.798461 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:05:17.799534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:05:17.803448 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:05:17.804473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:05:17.807459 systemd-journald[1112]: Time spent on flushing to /var/log/journal/45082afb12bf49df9a9e4804bd00a17d is 19.058ms for 851 entries. Feb 13 20:05:17.807459 systemd-journald[1112]: System Journal (/var/log/journal/45082afb12bf49df9a9e4804bd00a17d) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:05:17.832695 systemd-journald[1112]: Received client request to flush runtime journal. Feb 13 20:05:17.832736 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 20:05:17.805638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:05:17.807499 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:05:17.812096 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:05:17.817663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:05:17.819083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:05:17.820466 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:05:17.821764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:05:17.823183 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:05:17.829469 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:05:17.839399 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:05:17.842602 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:05:17.847574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:05:17.850711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:05:17.852976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:05:17.860877 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:05:17.867916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:05:17.868414 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 20:05:17.870408 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:05:17.878642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:05:17.880051 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:05:17.895491 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 20:05:17.895508 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 20:05:17.899262 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
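The flush statistics above (19.058 ms for 851 entries moved from the runtime journal to /var/log/journal) work out to a per-entry cost in the tens of microseconds:

    # Average persistence cost per journal entry, from the figures in the log.
    flush_ms, entries = 19.058, 851
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~22.4 us/entry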
Feb 13 20:05:17.903400 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 20:05:17.940404 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 20:05:17.945401 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 20:05:17.949398 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 20:05:17.953302 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:05:17.953692 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 20:05:17.957099 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:05:17.957119 systemd[1]: Reloading... Feb 13 20:05:18.013073 zram_generator::config[1208]: No configuration found. Feb 13 20:05:18.063164 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:05:18.107862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:05:18.142935 systemd[1]: Reloading finished in 185 ms. Feb 13 20:05:18.169496 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:05:18.170881 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:05:18.181589 systemd[1]: Starting ensure-sysext.service... Feb 13 20:05:18.183670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:05:18.189503 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:05:18.189518 systemd[1]: Reloading... Feb 13 20:05:18.199287 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:05:18.199578 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:05:18.200183 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:05:18.200433 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 20:05:18.200492 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 20:05:18.202527 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:05:18.202539 systemd-tmpfiles[1243]: Skipping /boot Feb 13 20:05:18.209222 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:05:18.209240 systemd-tmpfiles[1243]: Skipping /boot Feb 13 20:05:18.238410 zram_generator::config[1270]: No configuration found. Feb 13 20:05:18.319368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:05:18.354689 systemd[1]: Reloading finished in 164 ms. Feb 13 20:05:18.371249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:05:18.384778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:05:18.391954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:05:18.394303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
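The (sd-merge) lines above show systemd-sysext overlaying three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') onto /usr; the loop devices are those images being attached. Before merging, sysext checks each image's extension-release file against the host's os-release. A simplified sketch of that compatibility rule, assuming only the ID match (real sysext also compares SYSEXT_LEVEL/VERSION_ID and can check architecture):

    def parse_release(text: str) -> dict:
        # Parse os-release/extension-release KEY=VALUE lines.
        pairs = (line.split("=", 1) for line in text.splitlines()
                 if "=" in line and not line.startswith("#"))
        return {k: v.strip().strip('"') for k, v in pairs}

    def compatible(host_os_release: str, extension_release: str) -> bool:
        # The image ships usr/lib/extension-release.d/extension-release.<name>;
        # its ID= must match the host's, or be the wildcard "_any".
        host = parse_release(host_os_release)
        ext = parse_release(extension_release)
        return ext.get("ID") in ("_any", host.get("ID"))

    print(compatible("ID=flatcar\n", "ID=flatcar\n"))  # True
    print(compatible("ID=flatcar\n", "ID=_any\n"))     # True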
Feb 13 20:05:18.396474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:05:18.400662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:05:18.408744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:05:18.410947 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:05:18.413943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:05:18.414903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:05:18.419675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:05:18.423614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:05:18.427520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:05:18.433105 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:05:18.434901 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Feb 13 20:05:18.434918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:05:18.435036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:05:18.436632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:05:18.436758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:05:18.438337 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:05:18.438464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:05:18.440175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:05:18.447270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:05:18.453692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:05:18.455927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:05:18.459357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:05:18.461768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:05:18.463045 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:05:18.466259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:05:18.468269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:05:18.468422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:05:18.470374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:05:18.472414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:05:18.476684 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:05:18.480126 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:05:18.483002 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:05:18.483127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:05:18.487658 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:05:18.490838 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:05:18.508197 systemd[1]: Finished ensure-sysext.service. Feb 13 20:05:18.513490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1340) Feb 13 20:05:18.517676 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:05:18.525880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:05:18.535573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:05:18.536929 augenrules[1374]: No rules Feb 13 20:05:18.543760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:05:18.548594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:05:18.551302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:05:18.553565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:05:18.555552 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:05:18.560448 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:05:18.561437 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:05:18.562450 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:05:18.564729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:05:18.564880 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:05:18.566820 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:05:18.573594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:05:18.575704 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:05:18.575840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:05:18.577101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:05:18.577223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:05:18.591372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:05:18.599588 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:05:18.603062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:05:18.603128 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:05:18.604364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:05:18.613614 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:05:18.616622 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Feb 13 20:05:18.624626 systemd-resolved[1310]: Positive Trust Anchors: Feb 13 20:05:18.624850 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:05:18.624947 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:05:18.642518 systemd-resolved[1310]: Defaulting to hostname 'linux'. Feb 13 20:05:18.647324 systemd-networkd[1385]: lo: Link UP Feb 13 20:05:18.647327 systemd-networkd[1385]: lo: Gained carrier Feb 13 20:05:18.648011 systemd-networkd[1385]: Enumeration completed Feb 13 20:05:18.648139 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:05:18.648539 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:05:18.648548 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:05:18.649163 systemd-networkd[1385]: eth0: Link UP Feb 13 20:05:18.649172 systemd-networkd[1385]: eth0: Gained carrier Feb 13 20:05:18.649186 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:05:18.653909 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:05:18.660556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:05:18.661757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:05:18.663276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:05:18.665296 systemd[1]: Reached target network.target - Network. Feb 13 20:05:18.666400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:05:18.669435 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:05:18.673510 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:05:18.674192 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:05:18.674233 systemd-timesyncd[1386]: Initial clock synchronization to Thu 2025-02-13 20:05:18.364153 UTC. Feb 13 20:05:18.674803 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:05:18.685425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:05:18.686753 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:05:18.690160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:05:18.691232 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:05:18.692402 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
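The timesyncd entry above is journaled at 20:05:18.674233 but reports setting the clock to 20:05:18.364153 UTC, which suggests the VM's clock was running roughly 310 ms fast until the first NTP exchange with 10.0.0.1. This is a rough reading: journal timestamp bases across a clock step make the exact figure approximate.

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    logged = datetime.strptime("20:05:18.674233", FMT)  # journal timestamp
    synced = datetime.strptime("20:05:18.364153", FMT)  # time set by timesyncd
    print(f"{(logged - synced).total_seconds() * 1000:.2f} ms")  # ~310.08 ms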
Feb 13 20:05:18.693551 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:05:18.694830 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:05:18.695952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:05:18.697109 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:05:18.698253 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:05:18.698291 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:05:18.699161 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:05:18.700724 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:05:18.702884 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:05:18.711229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:05:18.713267 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:05:18.714795 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:05:18.715882 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:05:18.716755 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:05:18.717650 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:05:18.717682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:05:18.718549 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:05:18.720325 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:05:18.721495 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:05:18.723536 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:05:18.726041 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:05:18.728705 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:05:18.733461 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:05:18.735373 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:05:18.736182 jq[1414]: false Feb 13 20:05:18.738329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:05:18.741023 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:05:18.749566 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 20:05:18.754173 extend-filesystems[1415]: Found loop3 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found loop4 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found loop5 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda1 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda2 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda3 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found usr Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda4 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda6 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda7 Feb 13 20:05:18.754173 extend-filesystems[1415]: Found vda9 Feb 13 20:05:18.754173 extend-filesystems[1415]: Checking size of /dev/vda9 Feb 13 20:05:18.755465 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:05:18.762901 dbus-daemon[1413]: [system] SELinux support is enabled Feb 13 20:05:18.755889 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:05:18.756513 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:05:18.762509 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:05:18.764909 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:05:18.770929 jq[1432]: true Feb 13 20:05:18.768062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:05:18.770365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:05:18.770577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:05:18.770883 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:05:18.771023 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:05:18.773788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:05:18.773930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:05:18.776175 extend-filesystems[1415]: Resized partition /dev/vda9 Feb 13 20:05:18.782149 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:05:18.787589 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:05:18.791285 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:05:18.805825 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:05:18.805882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:05:18.809626 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:05:18.809650 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 20:05:18.815035 jq[1438]: true Feb 13 20:05:18.818466 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1360) Feb 13 20:05:18.828909 update_engine[1431]: I20250213 20:05:18.828572 1431 main.cc:92] Flatcar Update Engine starting Feb 13 20:05:18.832833 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:05:18.833241 update_engine[1431]: I20250213 20:05:18.833096 1431 update_check_scheduler.cc:74] Next update check in 7m42s Feb 13 20:05:18.836816 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:05:18.840335 tar[1435]: linux-arm64/LICENSE Feb 13 20:05:18.850862 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:05:18.851066 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:05:18.851282 systemd-logind[1427]: New seat seat0. Feb 13 20:05:18.852183 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:05:18.852183 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:05:18.852183 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:05:18.861024 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Feb 13 20:05:18.852849 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:05:18.862144 tar[1435]: linux-arm64/helm Feb 13 20:05:18.856624 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:05:18.858141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:05:18.886574 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:05:18.888885 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:05:18.891308 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:05:18.908536 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:05:19.011426 containerd[1440]: time="2025-02-13T20:05:19.011301562Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:05:19.033698 containerd[1440]: time="2025-02-13T20:05:19.033654047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.034918 containerd[1440]: time="2025-02-13T20:05:19.034887183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:05:19.034918 containerd[1440]: time="2025-02-13T20:05:19.034916290Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:05:19.034972 containerd[1440]: time="2025-02-13T20:05:19.034930132Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:05:19.035086 containerd[1440]: time="2025-02-13T20:05:19.035059017Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:05:19.035115 containerd[1440]: time="2025-02-13T20:05:19.035089008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
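The resize figures above are in 4 KiB ext4 blocks (553472 before, 1864699 after the online resize of /dev/vda9), i.e. the root filesystem grew from about 2.1 GiB to about 7.1 GiB to fill the partition:

    # Convert the ext4 resize figures from the kernel log into bytes/GiB.
    BLOCK = 4096  # "(4k) blocks" per the extend-filesystems message
    for label, blocks in (("before", 553472), ("after", 1864699)):
        size = blocks * BLOCK
        print(f"{label}: {size} bytes = {size / 2**30:.2f} GiB")
    # before: 2267021312 bytes = 2.11 GiB
    # after:  7637807104 bytes = 7.11 GiB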
type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035163 containerd[1440]: time="2025-02-13T20:05:19.035143992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035163 containerd[1440]: time="2025-02-13T20:05:19.035160602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035301984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035322901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035335012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035352046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035444711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035610893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035698136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035710094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035774460Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:05:19.035879 containerd[1440]: time="2025-02-13T20:05:19.035809026Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:05:19.039561 containerd[1440]: time="2025-02-13T20:05:19.039534159Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:05:19.039622 containerd[1440]: time="2025-02-13T20:05:19.039575569Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:05:19.039622 containerd[1440]: time="2025-02-13T20:05:19.039590527Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:05:19.039622 containerd[1440]: time="2025-02-13T20:05:19.039604561Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:05:19.039622 containerd[1440]: time="2025-02-13T20:05:19.039617211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1
Feb 13 20:05:19.039908 containerd[1440]: time="2025-02-13T20:05:19.039732946Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 20:05:19.039961 containerd[1440]: time="2025-02-13T20:05:19.039930772Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 20:05:19.040042 containerd[1440]: time="2025-02-13T20:05:19.040023322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 20:05:19.040064 containerd[1440]: time="2025-02-13T20:05:19.040045277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 20:05:19.040064 containerd[1440]: time="2025-02-13T20:05:19.040058081Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 20:05:19.040110 containerd[1440]: time="2025-02-13T20:05:19.040071577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040110 containerd[1440]: time="2025-02-13T20:05:19.040084919Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040110 containerd[1440]: time="2025-02-13T20:05:19.040096416Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040110 containerd[1440]: time="2025-02-13T20:05:19.040109297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040175 containerd[1440]: time="2025-02-13T20:05:19.040121870Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040175 containerd[1440]: time="2025-02-13T20:05:19.040133020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040175 containerd[1440]: time="2025-02-13T20:05:19.040144440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040175 containerd[1440]: time="2025-02-13T20:05:19.040154975Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 20:05:19.040175 containerd[1440]: time="2025-02-13T20:05:19.040172816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040185159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040201808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040213497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040224916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040236721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040247294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040259 containerd[1440]: time="2025-02-13T20:05:19.040258599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040270672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040285783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040296587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040307584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040318466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040332462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040359953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040372219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.040411 containerd[1440]: time="2025-02-13T20:05:19.040401480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 20:05:19.041031 containerd[1440]: time="2025-02-13T20:05:19.041006417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 20:05:19.041080 containerd[1440]: time="2025-02-13T20:05:19.041043829Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 20:05:19.041080 containerd[1440]: time="2025-02-13T20:05:19.041055441Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 20:05:19.041080 containerd[1440]: time="2025-02-13T20:05:19.041066899Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 20:05:19.041080 containerd[1440]: time="2025-02-13T20:05:19.041076050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.041169 containerd[1440]: time="2025-02-13T20:05:19.041090430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 20:05:19.041169 containerd[1440]: time="2025-02-13T20:05:19.041100543Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 20:05:19.041169 containerd[1440]: time="2025-02-13T20:05:19.041110501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 20:05:19.041469 containerd[1440]: time="2025-02-13T20:05:19.041365234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 20:05:19.041469 containerd[1440]: time="2025-02-13T20:05:19.041434829Z" level=info msg="Connect containerd service"
Feb 13 20:05:19.041714 containerd[1440]: time="2025-02-13T20:05:19.041500233Z" level=info msg="using legacy CRI server"
Feb 13 20:05:19.041714 containerd[1440]: time="2025-02-13T20:05:19.041507769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 20:05:19.041714 containerd[1440]: time="2025-02-13T20:05:19.041575595Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 20:05:19.042210 containerd[1440]: time="2025-02-13T20:05:19.042182108Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:05:19.042685 containerd[1440]: time="2025-02-13T20:05:19.042572801Z" level=info msg="Start subscribing containerd event"
Feb 13 20:05:19.042685 containerd[1440]: time="2025-02-13T20:05:19.042627054Z" level=info msg="Start recovering state"
Feb 13 20:05:19.042685 containerd[1440]: time="2025-02-13T20:05:19.042649624Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 20:05:19.042685 containerd[1440]: time="2025-02-13T20:05:19.042694611Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 20:05:19.042973 containerd[1440]: time="2025-02-13T20:05:19.042832724Z" level=info msg="Start event monitor"
Feb 13 20:05:19.042973 containerd[1440]: time="2025-02-13T20:05:19.042851103Z" level=info msg="Start snapshots syncer"
Feb 13 20:05:19.042973 containerd[1440]: time="2025-02-13T20:05:19.042859639Z" level=info msg="Start cni network conf syncer for default"
Feb 13 20:05:19.042973 containerd[1440]: time="2025-02-13T20:05:19.042868406Z" level=info msg="Start streaming server"
Feb 13 20:05:19.043620 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 20:05:19.045484 containerd[1440]: time="2025-02-13T20:05:19.044609777Z" level=info msg="containerd successfully booted in 0.036309s"
Feb 13 20:05:19.214217 tar[1435]: linux-arm64/README.md
Feb 13 20:05:19.225844 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 20:05:19.515709 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 20:05:19.535203 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 20:05:19.549599 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 20:05:19.554563 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 20:05:19.554743 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 20:05:19.557064 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 20:05:19.567290 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 20:05:19.569739 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 20:05:19.571568 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 20:05:19.572688 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 20:05:19.700545 systemd-networkd[1385]: eth0: Gained IPv6LL
Feb 13 20:05:19.706315 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 20:05:19.708487 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 20:05:19.719708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 20:05:19.721847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:19.723819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 20:05:19.738205 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 20:05:19.738429 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 20:05:19.739916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 20:05:19.743441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 20:05:20.227338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:20.228750 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 20:05:20.230217 systemd[1]: Startup finished in 553ms (kernel) + 4.502s (initrd) + 3.057s (userspace) = 8.112s.
Feb 13 20:05:20.230856 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:05:20.612591 kubelet[1526]: E0213 20:05:20.612476 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:05:20.614848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:05:20.614987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:05:25.401051 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 20:05:25.402117 systemd[1]: Started sshd@0-10.0.0.156:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734).
Feb 13 20:05:25.451719 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:05:25.453238 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:05:25.460942 systemd-logind[1427]: New session 1 of user core.
Feb 13 20:05:25.461924 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:05:25.474580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:05:25.482770 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:05:25.484667 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:05:25.490396 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:05:25.562700 systemd[1544]: Queued start job for default target default.target.
Feb 13 20:05:25.574369 systemd[1544]: Created slice app.slice - User Application Slice.
Feb 13 20:05:25.574422 systemd[1544]: Reached target paths.target - Paths.
Feb 13 20:05:25.574433 systemd[1544]: Reached target timers.target - Timers.
Feb 13 20:05:25.575519 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:05:25.583975 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:05:25.584031 systemd[1544]: Reached target sockets.target - Sockets.
Feb 13 20:05:25.584041 systemd[1544]: Reached target basic.target - Basic System.
Feb 13 20:05:25.584074 systemd[1544]: Reached target default.target - Main User Target.
Feb 13 20:05:25.584096 systemd[1544]: Startup finished in 88ms.
Feb 13 20:05:25.584286 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:05:25.585684 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:05:25.643569 systemd[1]: Started sshd@1-10.0.0.156:22-10.0.0.1:51738.service - OpenSSH per-connection server daemon (10.0.0.1:51738).
Feb 13 20:05:25.677713 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 51738 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:05:25.678878 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:05:25.682918 systemd-logind[1427]: New session 2 of user core.
Feb 13 20:05:25.695579 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:05:25.744515 sshd[1555]: pam_unix(sshd:session): session closed for user core
Feb 13 20:05:25.762575 systemd[1]: sshd@1-10.0.0.156:22-10.0.0.1:51738.service: Deactivated successfully.
Feb 13 20:05:25.764694 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 20:05:25.766045 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit.
Feb 13 20:05:25.773640 systemd[1]: Started sshd@2-10.0.0.156:22-10.0.0.1:51752.service - OpenSSH per-connection server daemon (10.0.0.1:51752).
Feb 13 20:05:25.774441 systemd-logind[1427]: Removed session 2.
Feb 13 20:05:25.804197 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 51752 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:05:25.805262 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:05:25.808929 systemd-logind[1427]: New session 3 of user core.
Feb 13 20:05:25.816568 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:05:25.863080 sshd[1562]: pam_unix(sshd:session): session closed for user core
Feb 13 20:05:25.876496 systemd[1]: sshd@2-10.0.0.156:22-10.0.0.1:51752.service: Deactivated successfully.
Feb 13 20:05:25.877769 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 20:05:25.879612 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit.
Feb 13 20:05:25.880672 systemd[1]: Started sshd@3-10.0.0.156:22-10.0.0.1:51764.service - OpenSSH per-connection server daemon (10.0.0.1:51764).
Feb 13 20:05:25.881279 systemd-logind[1427]: Removed session 3.
Feb 13 20:05:25.914857 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51764 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:05:25.915911 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:05:25.919341 systemd-logind[1427]: New session 4 of user core.
Feb 13 20:05:25.929505 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:05:25.979041 sshd[1569]: pam_unix(sshd:session): session closed for user core
Feb 13 20:05:25.992501 systemd[1]: sshd@3-10.0.0.156:22-10.0.0.1:51764.service: Deactivated successfully.
Feb 13 20:05:25.993806 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:05:25.994902 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:05:26.004642 systemd[1]: Started sshd@4-10.0.0.156:22-10.0.0.1:51776.service - OpenSSH per-connection server daemon (10.0.0.1:51776).
Feb 13 20:05:26.005462 systemd-logind[1427]: Removed session 4.
Feb 13 20:05:26.035067 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 51776 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:05:26.036109 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:05:26.039425 systemd-logind[1427]: New session 5 of user core.
Feb 13 20:05:26.053522 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:05:26.114793 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:05:26.115047 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:05:26.413594 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:05:26.413687 (dockerd)[1597]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:05:26.662193 dockerd[1597]: time="2025-02-13T20:05:26.662129975Z" level=info msg="Starting up"
Feb 13 20:05:26.771187 dockerd[1597]: time="2025-02-13T20:05:26.771098324Z" level=info msg="Loading containers: start."
Feb 13 20:05:26.858029 kernel: Initializing XFRM netlink socket
Feb 13 20:05:26.916755 systemd-networkd[1385]: docker0: Link UP
Feb 13 20:05:26.935528 dockerd[1597]: time="2025-02-13T20:05:26.935490804Z" level=info msg="Loading containers: done."
Feb 13 20:05:26.946241 dockerd[1597]: time="2025-02-13T20:05:26.946194669Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:05:26.946355 dockerd[1597]: time="2025-02-13T20:05:26.946283417Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:05:26.946421 dockerd[1597]: time="2025-02-13T20:05:26.946402261Z" level=info msg="Daemon has completed initialization"
Feb 13 20:05:26.971256 dockerd[1597]: time="2025-02-13T20:05:26.971047845Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:05:26.971253 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:05:27.526562 containerd[1440]: time="2025-02-13T20:05:27.526519543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 20:05:28.158741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889736031.mount: Deactivated successfully.
Feb 13 20:05:30.243562 containerd[1440]: time="2025-02-13T20:05:30.243387255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:30.244366 containerd[1440]: time="2025-02-13T20:05:30.244191894Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238"
Feb 13 20:05:30.245100 containerd[1440]: time="2025-02-13T20:05:30.245032212Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:30.248313 containerd[1440]: time="2025-02-13T20:05:30.248274237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:30.249023 containerd[1440]: time="2025-02-13T20:05:30.248978340Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.722418621s"
Feb 13 20:05:30.249023 containerd[1440]: time="2025-02-13T20:05:30.249021314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\""
Feb 13 20:05:30.249912 containerd[1440]: time="2025-02-13T20:05:30.249676101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 20:05:30.865282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:05:30.874535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:30.972473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:30.975941 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:05:31.020091 kubelet[1807]: E0213 20:05:31.020038 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:05:31.023532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:05:31.023805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:05:32.168674 containerd[1440]: time="2025-02-13T20:05:32.168618046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:32.169474 containerd[1440]: time="2025-02-13T20:05:32.169433003Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147"
Feb 13 20:05:32.170156 containerd[1440]: time="2025-02-13T20:05:32.170117496Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:32.174057 containerd[1440]: time="2025-02-13T20:05:32.174020621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:32.174809 containerd[1440]: time="2025-02-13T20:05:32.174755210Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.925048248s"
Feb 13 20:05:32.174809 containerd[1440]: time="2025-02-13T20:05:32.174785403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 20:05:32.175437 containerd[1440]: time="2025-02-13T20:05:32.175256325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 20:05:33.841881 containerd[1440]: time="2025-02-13T20:05:33.841832379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:33.842534 containerd[1440]: time="2025-02-13T20:05:33.842498773Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802"
Feb 13 20:05:33.843392 containerd[1440]: time="2025-02-13T20:05:33.843347313Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:33.846610 containerd[1440]: time="2025-02-13T20:05:33.846572407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:33.847672 containerd[1440]: time="2025-02-13T20:05:33.847637962Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.672349808s"
Feb 13 20:05:33.847737 containerd[1440]: time="2025-02-13T20:05:33.847671759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\""
Feb 13 20:05:33.848528 containerd[1440]: time="2025-02-13T20:05:33.848026030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 20:05:35.082350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475448807.mount: Deactivated successfully.
Feb 13 20:05:35.339759 containerd[1440]: time="2025-02-13T20:05:35.339645451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:35.340576 containerd[1440]: time="2025-02-13T20:05:35.340509600Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384"
Feb 13 20:05:35.341166 containerd[1440]: time="2025-02-13T20:05:35.341133532Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:35.343653 containerd[1440]: time="2025-02-13T20:05:35.343621220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:35.345049 containerd[1440]: time="2025-02-13T20:05:35.345004145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.496955277s"
Feb 13 20:05:35.345088 containerd[1440]: time="2025-02-13T20:05:35.345045196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 20:05:35.345692 containerd[1440]: time="2025-02-13T20:05:35.345660329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Feb 13 20:05:36.037191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63273025.mount: Deactivated successfully.
Feb 13 20:05:37.165607 containerd[1440]: time="2025-02-13T20:05:37.165550849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.166188 containerd[1440]: time="2025-02-13T20:05:37.166151896Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Feb 13 20:05:37.166950 containerd[1440]: time="2025-02-13T20:05:37.166916848Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.170098 containerd[1440]: time="2025-02-13T20:05:37.170046209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.171370 containerd[1440]: time="2025-02-13T20:05:37.171341337Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.82565006s"
Feb 13 20:05:37.171415 containerd[1440]: time="2025-02-13T20:05:37.171393952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Feb 13 20:05:37.171887 containerd[1440]: time="2025-02-13T20:05:37.171845326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:05:37.657799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362409098.mount: Deactivated successfully.
Feb 13 20:05:37.661445 containerd[1440]: time="2025-02-13T20:05:37.661406040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.662128 containerd[1440]: time="2025-02-13T20:05:37.661946540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Feb 13 20:05:37.662755 containerd[1440]: time="2025-02-13T20:05:37.662723530Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.664948 containerd[1440]: time="2025-02-13T20:05:37.664899801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:37.665751 containerd[1440]: time="2025-02-13T20:05:37.665724542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.842983ms"
Feb 13 20:05:37.665806 containerd[1440]: time="2025-02-13T20:05:37.665757028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 20:05:37.666177 containerd[1440]: time="2025-02-13T20:05:37.666155029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 20:05:38.258782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848811461.mount: Deactivated successfully.
Feb 13 20:05:41.273977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:05:41.280857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:41.386547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:41.390272 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:05:41.423598 kubelet[1949]: E0213 20:05:41.423550 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:05:41.425663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:05:41.425782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:05:41.499134 containerd[1440]: time="2025-02-13T20:05:41.499089576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.500125 containerd[1440]: time="2025-02-13T20:05:41.499909369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431"
Feb 13 20:05:41.500855 containerd[1440]: time="2025-02-13T20:05:41.500829237Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.505051 containerd[1440]: time="2025-02-13T20:05:41.505001334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.506454 containerd[1440]: time="2025-02-13T20:05:41.506308325Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.840120876s"
Feb 13 20:05:41.506454 containerd[1440]: time="2025-02-13T20:05:41.506344890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Feb 13 20:05:47.229502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:47.242590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:47.261464 systemd[1]: Reloading requested from client PID 1989 ('systemctl') (unit session-5.scope)...
Feb 13 20:05:47.261481 systemd[1]: Reloading...
Feb 13 20:05:47.327084 zram_generator::config[2025]: No configuration found.
Feb 13 20:05:47.443263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:05:47.494752 systemd[1]: Reloading finished in 232 ms.
Feb 13 20:05:47.537617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:47.540202 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:05:47.540517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:47.541903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:05:47.638295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:05:47.641805 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:05:47.674389 kubelet[2075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:05:47.674389 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:05:47.674389 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:05:47.674714 kubelet[2075]: I0213 20:05:47.674453 2075 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:05:48.285101 kubelet[2075]: I0213 20:05:48.285060 2075 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:05:48.285101 kubelet[2075]: I0213 20:05:48.285090 2075 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:05:48.285372 kubelet[2075]: I0213 20:05:48.285340 2075 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:05:48.313836 kubelet[2075]: I0213 20:05:48.313693 2075 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:05:48.314570 kubelet[2075]: E0213 20:05:48.314527 2075 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:48.322074 kubelet[2075]: E0213 20:05:48.321970 2075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:05:48.322074 kubelet[2075]: I0213 20:05:48.321995 2075 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:05:48.326040 kubelet[2075]: I0213 20:05:48.326018 2075 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:05:48.326239 kubelet[2075]: I0213 20:05:48.326218 2075 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:05:48.326398 kubelet[2075]: I0213 20:05:48.326241 2075 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:05:48.326488 kubelet[2075]: I0213 20:05:48.326480 2075 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:05:48.326516 kubelet[2075]: I0213 20:05:48.326490 2075 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 20:05:48.326677 kubelet[2075]: I0213 20:05:48.326663 2075 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:05:48.329034 kubelet[2075]: I0213 20:05:48.329008 2075 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 20:05:48.329034 kubelet[2075]: I0213 20:05:48.329030 2075 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:05:48.329109 kubelet[2075]: I0213 20:05:48.329047 2075 kubelet.go:352] "Adding apiserver pod source"
Feb 13 20:05:48.329109 kubelet[2075]: I0213 20:05:48.329056 2075 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:05:48.336236 kubelet[2075]: I0213 20:05:48.332233 2075 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:05:48.336236 kubelet[2075]: I0213 20:05:48.333289 2075 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:05:48.336236 kubelet[2075]: W0213 20:05:48.333538 2075 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:05:48.336236 kubelet[2075]: I0213 20:05:48.334773 2075 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 20:05:48.336236 kubelet[2075]: I0213 20:05:48.334797 2075 server.go:1287] "Started kubelet"
Feb 13 20:05:48.336236 kubelet[2075]: W0213 20:05:48.335865 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:48.336236 kubelet[2075]: E0213 20:05:48.335906 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:48.336236 kubelet[2075]: W0213 20:05:48.336106 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:48.336236 kubelet[2075]: E0213 20:05:48.336145 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:48.336236 kubelet[2075]: I0213 20:05:48.336190 2075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:05:48.336591 kubelet[2075]: I0213 20:05:48.336556 2075 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:05:48.336809 kubelet[2075]: I0213 20:05:48.336790 2075 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:05:48.336918 kubelet[2075]: I0213 20:05:48.336888 2075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:05:48.337414 kubelet[2075]: I0213 20:05:48.337350 2075 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 20:05:48.337913 kubelet[2075]: I0213 20:05:48.337785 2075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:05:48.338763 kubelet[2075]: E0213 20:05:48.338452 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 20:05:48.338763 kubelet[2075]: I0213 20:05:48.338491 2075 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 20:05:48.338763 kubelet[2075]: I0213 20:05:48.338623 2075 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:05:48.338763 kubelet[2075]: I0213 20:05:48.338671 2075 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:05:48.339318 kubelet[2075]: W0213 20:05:48.339285 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:48.339354 kubelet[2075]: E0213 20:05:48.339328 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:48.339425 kubelet[2075]: E0213 20:05:48.339397 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="200ms"
Feb 13 20:05:48.339605 kubelet[2075]: I0213 20:05:48.339548 2075 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:05:48.339639 kubelet[2075]: I0213 20:05:48.339625 2075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:05:48.340216 kubelet[2075]: E0213 20:05:48.339921 2075 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.156:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.156:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dd4011677335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:05:48.334781237 +0000 UTC m=+0.690005394,LastTimestamp:2025-02-13 20:05:48.334781237 +0000 UTC m=+0.690005394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 20:05:48.341448 kubelet[2075]: I0213 20:05:48.341326 2075 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:05:48.343502 kubelet[2075]: E0213 20:05:48.343481 2075 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:05:48.352199 kubelet[2075]: I0213 20:05:48.352124 2075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:05:48.353301 kubelet[2075]: I0213 20:05:48.353278 2075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:05:48.353301 kubelet[2075]: I0213 20:05:48.353301 2075 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 20:05:48.353399 kubelet[2075]: I0213 20:05:48.353320 2075 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 20:05:48.353399 kubelet[2075]: I0213 20:05:48.353327 2075 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 20:05:48.353399 kubelet[2075]: E0213 20:05:48.353364 2075 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:05:48.353754 kubelet[2075]: I0213 20:05:48.353666 2075 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 20:05:48.353754 kubelet[2075]: I0213 20:05:48.353680 2075 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 20:05:48.353754 kubelet[2075]: I0213 20:05:48.353695 2075 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:05:48.353754 kubelet[2075]: W0213 20:05:48.353734 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:48.353856 kubelet[2075]: E0213 20:05:48.353759 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:48.423227 kubelet[2075]: I0213 20:05:48.423180 2075 policy_none.go:49] "None policy: Start"
Feb 13 20:05:48.423227 kubelet[2075]: I0213 20:05:48.423213 2075 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 20:05:48.423227 kubelet[2075]: I0213 20:05:48.423226 2075 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:05:48.427870 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 20:05:48.438674 kubelet[2075]: E0213 20:05:48.438642 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 20:05:48.442694 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 20:05:48.454274 kubelet[2075]: E0213 20:05:48.454255 2075 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 20:05:48.454600 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 20:05:48.455671 kubelet[2075]: I0213 20:05:48.455645 2075 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:05:48.455998 kubelet[2075]: I0213 20:05:48.455813 2075 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:05:48.455998 kubelet[2075]: I0213 20:05:48.455831 2075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:05:48.456071 kubelet[2075]: I0213 20:05:48.456032 2075 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:05:48.457065 kubelet[2075]: E0213 20:05:48.457046 2075 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 20:05:48.457172 kubelet[2075]: E0213 20:05:48.457153 2075 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 20:05:48.539920 kubelet[2075]: E0213 20:05:48.539833 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="400ms"
Feb 13 20:05:48.557860 kubelet[2075]: I0213 20:05:48.557819 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 20:05:48.558183 kubelet[2075]: E0213 20:05:48.558159 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost"
Feb 13 20:05:48.661528 systemd[1]: Created slice kubepods-burstable-pod30b56832df5674787131b0b8be9a4fcf.slice - libcontainer container kubepods-burstable-pod30b56832df5674787131b0b8be9a4fcf.slice.
Feb 13 20:05:48.679479 kubelet[2075]: E0213 20:05:48.679446 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 20:05:48.682771 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice.
Feb 13 20:05:48.696590 kubelet[2075]: E0213 20:05:48.696435 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 20:05:48.698891 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice.
Feb 13 20:05:48.700444 kubelet[2075]: E0213 20:05:48.700289 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 20:05:48.741479 kubelet[2075]: I0213 20:05:48.741417 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 20:05:48.741479 kubelet[2075]: I0213 20:05:48.741451 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 20:05:48.741479 kubelet[2075]: I0213 20:05:48.741469 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 20:05:48.741589 kubelet[2075]: I0213 20:05:48.741486 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 20:05:48.741589 kubelet[2075]: I0213 20:05:48.741501 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 20:05:48.741589 kubelet[2075]: I0213 20:05:48.741551 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 20:05:48.741589 kubelet[2075]: I0213 20:05:48.741582 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 20:05:48.741690 kubelet[2075]: I0213 20:05:48.741598 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 20:05:48.741690 kubelet[2075]: I0213 20:05:48.741613 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 20:05:48.759299 kubelet[2075]: I0213 20:05:48.759264 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 20:05:48.759577 kubelet[2075]: E0213 20:05:48.759544 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost"
Feb 13 20:05:48.941023 kubelet[2075]: E0213 20:05:48.940933 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="800ms"
Feb 13 20:05:48.980335 kubelet[2075]: E0213 20:05:48.980301 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:48.982786 containerd[1440]: time="2025-02-13T20:05:48.982642142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30b56832df5674787131b0b8be9a4fcf,Namespace:kube-system,Attempt:0,}"
Feb 13 20:05:48.996872 kubelet[2075]: E0213 20:05:48.996808 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:48.997301 containerd[1440]: time="2025-02-13T20:05:48.997105712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}"
Feb 13 20:05:49.001436 kubelet[2075]: E0213 20:05:49.001407 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:49.001714 containerd[1440]: time="2025-02-13T20:05:49.001683728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}"
Feb 13 20:05:49.160639 kubelet[2075]: I0213 20:05:49.160614 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 20:05:49.160938 kubelet[2075]: E0213 20:05:49.160903 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost"
Feb 13 20:05:49.484412 kubelet[2075]: W0213 20:05:49.484340 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:49.484496 kubelet[2075]: E0213 20:05:49.484421 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:49.528139 kubelet[2075]: W0213 20:05:49.528060 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:49.528139 kubelet[2075]: E0213 20:05:49.528117 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:49.528297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456307499.mount: Deactivated successfully.
Feb 13 20:05:49.534571 containerd[1440]: time="2025-02-13T20:05:49.534512786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:05:49.535431 containerd[1440]: time="2025-02-13T20:05:49.535399640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:05:49.536172 containerd[1440]: time="2025-02-13T20:05:49.536147911Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:05:49.536947 containerd[1440]: time="2025-02-13T20:05:49.536845939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:05:49.537591 containerd[1440]: time="2025-02-13T20:05:49.537563552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 20:05:49.539401 containerd[1440]: time="2025-02-13T20:05:49.539317914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:05:49.539401 containerd[1440]: time="2025-02-13T20:05:49.539346254Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:05:49.543099 kubelet[2075]: W0213 20:05:49.543061 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:49.543315 containerd[1440]: time="2025-02-13T20:05:49.543208367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:05:49.543396 kubelet[2075]: E0213 20:05:49.543278 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:49.543995 containerd[1440]: time="2025-02-13T20:05:49.543942609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.20328ms"
Feb 13 20:05:49.547052 containerd[1440]: time="2025-02-13T20:05:49.547020916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.305392ms"
Feb 13 20:05:49.548004 containerd[1440]: time="2025-02-13T20:05:49.547887784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.724717ms"
Feb 13 20:05:49.686065 containerd[1440]: time="2025-02-13T20:05:49.685911868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:05:49.686065 containerd[1440]: time="2025-02-13T20:05:49.686006561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:05:49.686065 containerd[1440]: time="2025-02-13T20:05:49.686023629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.686263 containerd[1440]: time="2025-02-13T20:05:49.686144064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.686851 containerd[1440]: time="2025-02-13T20:05:49.686593227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:05:49.686851 containerd[1440]: time="2025-02-13T20:05:49.686671492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:05:49.686851 containerd[1440]: time="2025-02-13T20:05:49.686698313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.686851 containerd[1440]: time="2025-02-13T20:05:49.686805477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.688235 containerd[1440]: time="2025-02-13T20:05:49.688084134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:05:49.688235 containerd[1440]: time="2025-02-13T20:05:49.688161240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:05:49.688455 containerd[1440]: time="2025-02-13T20:05:49.688177189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.689023 containerd[1440]: time="2025-02-13T20:05:49.688972587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:05:49.704640 systemd[1]: Started cri-containerd-ebf1b118a0d435b82fc9f59b12114aadd131adc7706d18a794a8b084fb59ed93.scope - libcontainer container ebf1b118a0d435b82fc9f59b12114aadd131adc7706d18a794a8b084fb59ed93.
Feb 13 20:05:49.707557 kubelet[2075]: W0213 20:05:49.707511 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused
Feb 13 20:05:49.707557 kubelet[2075]: E0213 20:05:49.707554 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:05:49.708368 systemd[1]: Started cri-containerd-36b3effae96ce061cc4a9abfa041606810c371c1f55c5501ad099fa8e10a5e9a.scope - libcontainer container 36b3effae96ce061cc4a9abfa041606810c371c1f55c5501ad099fa8e10a5e9a.
Feb 13 20:05:49.709532 systemd[1]: Started cri-containerd-b439d949747544293593be6de61798f9ac475d5f109fe74ceebe312f46daebba.scope - libcontainer container b439d949747544293593be6de61798f9ac475d5f109fe74ceebe312f46daebba.
Feb 13 20:05:49.734718 containerd[1440]: time="2025-02-13T20:05:49.734354950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebf1b118a0d435b82fc9f59b12114aadd131adc7706d18a794a8b084fb59ed93\""
Feb 13 20:05:49.736648 kubelet[2075]: E0213 20:05:49.736601 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:49.739115 containerd[1440]: time="2025-02-13T20:05:49.739081774Z" level=info msg="CreateContainer within sandbox \"ebf1b118a0d435b82fc9f59b12114aadd131adc7706d18a794a8b084fb59ed93\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 20:05:49.740982 containerd[1440]: time="2025-02-13T20:05:49.740955970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b3effae96ce061cc4a9abfa041606810c371c1f55c5501ad099fa8e10a5e9a\""
Feb 13 20:05:49.741729 kubelet[2075]: E0213 20:05:49.741434 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="1.6s"
Feb 13 20:05:49.741729 kubelet[2075]: E0213 20:05:49.741593 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:49.743462 containerd[1440]: time="2025-02-13T20:05:49.743366709Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30b56832df5674787131b0b8be9a4fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b439d949747544293593be6de61798f9ac475d5f109fe74ceebe312f46daebba\"" Feb 13 20:05:49.744234 containerd[1440]: time="2025-02-13T20:05:49.744163826Z" level=info msg="CreateContainer within sandbox \"36b3effae96ce061cc4a9abfa041606810c371c1f55c5501ad099fa8e10a5e9a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:05:49.744918 kubelet[2075]: E0213 20:05:49.744832 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:49.746517 containerd[1440]: time="2025-02-13T20:05:49.746457727Z" level=info msg="CreateContainer within sandbox \"b439d949747544293593be6de61798f9ac475d5f109fe74ceebe312f46daebba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:05:49.756914 containerd[1440]: time="2025-02-13T20:05:49.756844274Z" level=info msg="CreateContainer within sandbox \"ebf1b118a0d435b82fc9f59b12114aadd131adc7706d18a794a8b084fb59ed93\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33a586515fe3fd4a7941f1f2744296d7d733e37a9fe84d1ce238bbffd40583e9\"" Feb 13 20:05:49.757908 containerd[1440]: time="2025-02-13T20:05:49.757879823Z" level=info msg="StartContainer for \"33a586515fe3fd4a7941f1f2744296d7d733e37a9fe84d1ce238bbffd40583e9\"" Feb 13 20:05:49.760709 containerd[1440]: time="2025-02-13T20:05:49.760677288Z" level=info msg="CreateContainer within sandbox \"36b3effae96ce061cc4a9abfa041606810c371c1f55c5501ad099fa8e10a5e9a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"334be0408d19ac1f8cbc3c1e582e0f17086e1112f9d090cb8c2535f18f2b282c\"" Feb 13 20:05:49.761186 containerd[1440]: time="2025-02-13T20:05:49.761161107Z" level=info msg="StartContainer for \"334be0408d19ac1f8cbc3c1e582e0f17086e1112f9d090cb8c2535f18f2b282c\"" Feb 13 20:05:49.764176 containerd[1440]: time="2025-02-13T20:05:49.764040754Z" level=info msg="CreateContainer within sandbox \"b439d949747544293593be6de61798f9ac475d5f109fe74ceebe312f46daebba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f65762062a98c3967808ffa6a40061791a4c85e3eacd19586124969c96535028\"" Feb 13 20:05:49.764710 containerd[1440]: time="2025-02-13T20:05:49.764667152Z" level=info msg="StartContainer for \"f65762062a98c3967808ffa6a40061791a4c85e3eacd19586124969c96535028\"" Feb 13 20:05:49.782520 systemd[1]: Started cri-containerd-33a586515fe3fd4a7941f1f2744296d7d733e37a9fe84d1ce238bbffd40583e9.scope - libcontainer container 33a586515fe3fd4a7941f1f2744296d7d733e37a9fe84d1ce238bbffd40583e9. Feb 13 20:05:49.786296 systemd[1]: Started cri-containerd-334be0408d19ac1f8cbc3c1e582e0f17086e1112f9d090cb8c2535f18f2b282c.scope - libcontainer container 334be0408d19ac1f8cbc3c1e582e0f17086e1112f9d090cb8c2535f18f2b282c. Feb 13 20:05:49.787951 systemd[1]: Started cri-containerd-f65762062a98c3967808ffa6a40061791a4c85e3eacd19586124969c96535028.scope - libcontainer container f65762062a98c3967808ffa6a40061791a4c85e3eacd19586124969c96535028. 
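The containerd/kubelet exchange above is the standard CRI lifecycle for the three static control-plane pods: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer runs it. What follows is a minimal Go sketch of the same gRPC sequence driven directly against containerd's CRI socket, not the kubelet's actual code; the socket path and the scheduler image reference are assumptions (the log never names the image), and error handling is trimmed.

// Sketch of the CRI call sequence visible above:
// RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI socket at its conventional path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Mirrors "RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,...}".
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "95ef9ac46cd4dbaadc63cb713310ae59",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors "CreateContainer within sandbox ... &ContainerMetadata{Name:kube-scheduler,...}".
	// The image reference below is an assumption, not taken from the log.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", created.ContainerId, sb.PodSandboxId)
}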
Feb 13 20:05:49.817659 containerd[1440]: time="2025-02-13T20:05:49.817529874Z" level=info msg="StartContainer for \"33a586515fe3fd4a7941f1f2744296d7d733e37a9fe84d1ce238bbffd40583e9\" returns successfully" Feb 13 20:05:49.822792 containerd[1440]: time="2025-02-13T20:05:49.822656135Z" level=info msg="StartContainer for \"f65762062a98c3967808ffa6a40061791a4c85e3eacd19586124969c96535028\" returns successfully" Feb 13 20:05:49.827468 containerd[1440]: time="2025-02-13T20:05:49.827390513Z" level=info msg="StartContainer for \"334be0408d19ac1f8cbc3c1e582e0f17086e1112f9d090cb8c2535f18f2b282c\" returns successfully" Feb 13 20:05:49.965750 kubelet[2075]: I0213 20:05:49.965724 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:05:49.967651 kubelet[2075]: E0213 20:05:49.967561 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Feb 13 20:05:50.360010 kubelet[2075]: E0213 20:05:50.359977 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:50.360143 kubelet[2075]: E0213 20:05:50.360106 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:50.363406 kubelet[2075]: E0213 20:05:50.363320 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:50.363478 kubelet[2075]: E0213 20:05:50.363444 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:50.364631 kubelet[2075]: E0213 20:05:50.364607 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:50.364722 kubelet[2075]: E0213 20:05:50.364707 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.367928 kubelet[2075]: E0213 20:05:51.367896 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:51.368246 kubelet[2075]: E0213 20:05:51.367963 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:51.368246 kubelet[2075]: E0213 20:05:51.368024 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.368246 kubelet[2075]: E0213 20:05:51.368072 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.368320 kubelet[2075]: E0213 20:05:51.368246 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:05:51.368346 kubelet[2075]: E0213 20:05:51.368329 2075 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.441813 kubelet[2075]: E0213 20:05:51.441776 2075 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:05:51.570470 kubelet[2075]: I0213 20:05:51.570439 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:05:51.581766 kubelet[2075]: I0213 20:05:51.581737 2075 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:05:51.581815 kubelet[2075]: E0213 20:05:51.581772 2075 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 20:05:51.588735 kubelet[2075]: E0213 20:05:51.588708 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:51.689390 kubelet[2075]: E0213 20:05:51.688976 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:51.739640 kubelet[2075]: I0213 20:05:51.739465 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:51.747046 kubelet[2075]: E0213 20:05:51.747022 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:51.747046 kubelet[2075]: I0213 20:05:51.747045 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:51.748724 kubelet[2075]: E0213 20:05:51.748530 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:51.748724 kubelet[2075]: I0213 20:05:51.748551 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:51.749876 kubelet[2075]: E0213 20:05:51.749855 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:52.332296 kubelet[2075]: I0213 20:05:52.332262 2075 apiserver.go:52] "Watching apiserver" Feb 13 20:05:52.339432 kubelet[2075]: I0213 20:05:52.339408 2075 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:05:53.601727 systemd[1]: Reloading requested from client PID 2356 ('systemctl') (unit session-5.scope)... Feb 13 20:05:53.601741 systemd[1]: Reloading... Feb 13 20:05:53.658405 zram_generator::config[2398]: No configuration found. Feb 13 20:05:53.748163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:05:53.810980 systemd[1]: Reloading finished in 208 ms. 
Feb 13 20:05:53.840591 kubelet[2075]: I0213 20:05:53.840562 2075 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:05:53.840661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:53.858144 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:05:53.858362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:53.858446 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 122.2M memory peak, 0B memory swap peak. Feb 13 20:05:53.869657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:53.959677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:53.963146 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:05:53.993059 kubelet[2437]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:05:53.993059 kubelet[2437]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:05:53.993059 kubelet[2437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:05:53.993462 kubelet[2437]: I0213 20:05:53.993139 2437 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:05:54.000928 kubelet[2437]: I0213 20:05:53.999603 2437 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:05:54.000928 kubelet[2437]: I0213 20:05:53.999629 2437 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:05:54.000928 kubelet[2437]: I0213 20:05:53.999861 2437 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:05:54.001110 kubelet[2437]: I0213 20:05:54.001080 2437 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:05:54.005854 kubelet[2437]: I0213 20:05:54.005827 2437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:05:54.009941 kubelet[2437]: E0213 20:05:54.009856 2437 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:05:54.009941 kubelet[2437]: I0213 20:05:54.009935 2437 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:05:54.012962 kubelet[2437]: I0213 20:05:54.012931 2437 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:05:54.013205 kubelet[2437]: I0213 20:05:54.013162 2437 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:05:54.013344 kubelet[2437]: I0213 20:05:54.013191 2437 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:05:54.013344 kubelet[2437]: I0213 20:05:54.013345 2437 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:05:54.013471 kubelet[2437]: I0213 20:05:54.013353 2437 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:05:54.013471 kubelet[2437]: I0213 20:05:54.013410 2437 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:54.013579 kubelet[2437]: I0213 20:05:54.013550 2437 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:05:54.013579 kubelet[2437]: I0213 20:05:54.013570 2437 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:05:54.013630 kubelet[2437]: I0213 20:05:54.013586 2437 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:05:54.013685 kubelet[2437]: I0213 20:05:54.013675 2437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:05:54.018190 kubelet[2437]: I0213 20:05:54.018161 2437 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:05:54.018694 kubelet[2437]: I0213 20:05:54.018672 2437 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:05:54.019098 kubelet[2437]: I0213 20:05:54.019078 2437 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:05:54.019134 kubelet[2437]: I0213 20:05:54.019113 2437 server.go:1287] "Started kubelet" Feb 13 20:05:54.022246 kubelet[2437]: I0213 20:05:54.022218 2437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" 
Feb 13 20:05:54.022845 kubelet[2437]: I0213 20:05:54.022745 2437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:05:54.023578 kubelet[2437]: I0213 20:05:54.023554 2437 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:05:54.023645 kubelet[2437]: I0213 20:05:54.023582 2437 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:05:54.024587 kubelet[2437]: E0213 20:05:54.024559 2437 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:54.024870 kubelet[2437]: I0213 20:05:54.024825 2437 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:05:54.024963 kubelet[2437]: I0213 20:05:54.024948 2437 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:05:54.026440 kubelet[2437]: I0213 20:05:54.025922 2437 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:05:54.027220 kubelet[2437]: I0213 20:05:54.027160 2437 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:05:54.027362 kubelet[2437]: I0213 20:05:54.027339 2437 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:05:54.031761 kubelet[2437]: I0213 20:05:54.031733 2437 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:05:54.032230 kubelet[2437]: I0213 20:05:54.031824 2437 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:05:54.039655 kubelet[2437]: I0213 20:05:54.039613 2437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:05:54.042730 kubelet[2437]: I0213 20:05:54.042703 2437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:05:54.042801 kubelet[2437]: I0213 20:05:54.042745 2437 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:05:54.042801 kubelet[2437]: I0213 20:05:54.042764 2437 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 20:05:54.042801 kubelet[2437]: I0213 20:05:54.042770 2437 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:05:54.042871 kubelet[2437]: E0213 20:05:54.042811 2437 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:05:54.046342 kubelet[2437]: I0213 20:05:54.046319 2437 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:05:54.048357 kubelet[2437]: E0213 20:05:54.048331 2437 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:05:54.072602 kubelet[2437]: I0213 20:05:54.072578 2437 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:05:54.072602 kubelet[2437]: I0213 20:05:54.072595 2437 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:05:54.072714 kubelet[2437]: I0213 20:05:54.072612 2437 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:54.072770 kubelet[2437]: I0213 20:05:54.072750 2437 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:05:54.072802 kubelet[2437]: I0213 20:05:54.072768 2437 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:05:54.072802 kubelet[2437]: I0213 20:05:54.072785 2437 policy_none.go:49] "None policy: Start" Feb 13 20:05:54.072802 kubelet[2437]: I0213 20:05:54.072792 2437 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:05:54.072802 kubelet[2437]: I0213 20:05:54.072803 2437 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:05:54.072901 kubelet[2437]: I0213 20:05:54.072890 2437 state_mem.go:75] "Updated machine memory state" Feb 13 20:05:54.076224 kubelet[2437]: I0213 20:05:54.076142 2437 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:05:54.076409 kubelet[2437]: I0213 20:05:54.076391 2437 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:05:54.076457 kubelet[2437]: I0213 20:05:54.076408 2437 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:05:54.076858 kubelet[2437]: I0213 20:05:54.076654 2437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:05:54.077876 kubelet[2437]: E0213 20:05:54.077364 2437 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:05:54.143513 kubelet[2437]: I0213 20:05:54.143370 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:54.143513 kubelet[2437]: I0213 20:05:54.143433 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:54.143513 kubelet[2437]: I0213 20:05:54.143492 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.179949 kubelet[2437]: I0213 20:05:54.179916 2437 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:05:54.186373 kubelet[2437]: I0213 20:05:54.186262 2437 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 20:05:54.186492 kubelet[2437]: I0213 20:05:54.186462 2437 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:05:54.226864 kubelet[2437]: I0213 20:05:54.226808 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:54.226864 kubelet[2437]: I0213 20:05:54.226843 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.226864 kubelet[2437]: I0213 20:05:54.226861 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.226864 kubelet[2437]: I0213 20:05:54.226875 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.227111 kubelet[2437]: I0213 20:05:54.226892 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:54.227111 kubelet[2437]: I0213 20:05:54.226908 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b56832df5674787131b0b8be9a4fcf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b56832df5674787131b0b8be9a4fcf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:54.227111 kubelet[2437]: I0213 20:05:54.226924 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.227111 kubelet[2437]: I0213 20:05:54.226942 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:54.227111 kubelet[2437]: I0213 20:05:54.226974 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:54.449417 kubelet[2437]: E0213 20:05:54.449074 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:54.449417 kubelet[2437]: E0213 20:05:54.449092 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:54.449417 kubelet[2437]: E0213 20:05:54.449265 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:55.015234 kubelet[2437]: I0213 20:05:55.015181 2437 apiserver.go:52] "Watching apiserver" Feb 13 20:05:55.024960 kubelet[2437]: I0213 20:05:55.024916 2437 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:05:55.056745 kubelet[2437]: I0213 20:05:55.056496 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:55.057364 kubelet[2437]: E0213 20:05:55.057334 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:55.057534 kubelet[2437]: I0213 20:05:55.057516 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:55.082335 kubelet[2437]: E0213 20:05:55.082284 2437 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:55.082547 kubelet[2437]: E0213 20:05:55.082440 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:55.082815 kubelet[2437]: E0213 20:05:55.082661 2437 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:55.083254 kubelet[2437]: E0213 20:05:55.083236 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:55.105660 kubelet[2437]: I0213 20:05:55.105613 2437 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.105597951 podStartE2EDuration="1.105597951s" podCreationTimestamp="2025-02-13 20:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:55.099524142 +0000 UTC m=+1.133551206" watchObservedRunningTime="2025-02-13 20:05:55.105597951 +0000 UTC m=+1.139625095" Feb 13 20:05:55.112976 kubelet[2437]: I0213 20:05:55.112749 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.112736451 podStartE2EDuration="1.112736451s" podCreationTimestamp="2025-02-13 20:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:55.105671763 +0000 UTC m=+1.139698827" watchObservedRunningTime="2025-02-13 20:05:55.112736451 +0000 UTC m=+1.146763515" Feb 13 20:05:55.120575 kubelet[2437]: I0213 20:05:55.120532 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.120520853 podStartE2EDuration="1.120520853s" podCreationTimestamp="2025-02-13 20:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:55.112867712 +0000 UTC m=+1.146894776" watchObservedRunningTime="2025-02-13 20:05:55.120520853 +0000 UTC m=+1.154547917" Feb 13 20:05:55.460713 sudo[1579]: pam_unix(sudo:session): session closed for user root Feb 13 20:05:55.462252 sshd[1576]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:55.465334 systemd[1]: sshd@4-10.0.0.156:22-10.0.0.1:51776.service: Deactivated successfully. Feb 13 20:05:55.466943 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:05:55.468416 systemd[1]: session-5.scope: Consumed 7.096s CPU time, 153.6M memory peak, 0B memory swap peak. Feb 13 20:05:55.468830 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:05:55.469918 systemd-logind[1427]: Removed session 5. 
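The pod_startup_latency_tracker entries above are plain timestamp arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (firstStartedPulling and lastFinishedPulling are the zero time here, consistent with the static pod images already being present). For kube-apiserver-localhost that is 20:05:55.105597951 minus 20:05:54.000000000, and the subtraction can be checked directly:

// Check the podStartSLOduration arithmetic from the latency-tracker
// entries above: watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-apiserver-localhost entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-02-13 20:05:54 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-02-13 20:05:55.105597951 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.105597951s, matching the reported podStartSLOduration.
	fmt.Println(observed.Sub(created))
}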
Feb 13 20:05:56.058403 kubelet[2437]: E0213 20:05:56.058352 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:56.058732 kubelet[2437]: E0213 20:05:56.058446 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:57.060177 kubelet[2437]: E0213 20:05:57.060147 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:00.401464 kubelet[2437]: E0213 20:06:00.401361 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:00.413083 kubelet[2437]: I0213 20:06:00.413046 2437 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:06:00.413393 containerd[1440]: time="2025-02-13T20:06:00.413332244Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:06:00.414304 kubelet[2437]: I0213 20:06:00.413878 2437 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:06:00.896651 kubelet[2437]: E0213 20:06:00.896545 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.066290 kubelet[2437]: E0213 20:06:01.065961 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.066290 kubelet[2437]: E0213 20:06:01.065970 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.405368 systemd[1]: Created slice kubepods-besteffort-pod4eeb8549_fd6d_4762_8bcf_0561b1d92c49.slice - libcontainer container kubepods-besteffort-pod4eeb8549_fd6d_4762_8bcf_0561b1d92c49.slice. Feb 13 20:06:01.417086 systemd[1]: Created slice kubepods-burstable-poddbd39ffa_0aa6_4acd_b1d8_c7e908994dc8.slice - libcontainer container kubepods-burstable-poddbd39ffa_0aa6_4acd_b1d8_c7e908994dc8.slice. 
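The dns.go:153 "Nameserver limits exceeded" error recurring throughout this log is the kubelet noticing more nameserver lines in the host's resolv.conf than the resolver will honor (three, the classic glibc MAXNS limit); it clips the list to the first three, here 1.1.1.1 1.0.0.1 8.8.8.8, and keeps warning on every sync. A minimal sketch of that clipping, reading the conventional path:

// Sketch of the nameserver clipping behind the recurring dns.go:153
// warnings: keep the first three "nameserver" entries, report the rest.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const maxNS = 3 // glibc MAXNS; resolvers ignore entries past this
	if len(servers) > maxNS {
		fmt.Printf("nameserver limits exceeded, applying first %d: %s\n",
			maxNS, strings.Join(servers[:maxNS], " "))
		servers = servers[:maxNS]
	}
	fmt.Println("applied:", strings.Join(servers, " "))
}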
Feb 13 20:06:01.473593 kubelet[2437]: I0213 20:06:01.473552 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eeb8549-fd6d-4762-8bcf-0561b1d92c49-lib-modules\") pod \"kube-proxy-5xknl\" (UID: \"4eeb8549-fd6d-4762-8bcf-0561b1d92c49\") " pod="kube-system/kube-proxy-5xknl" Feb 13 20:06:01.473593 kubelet[2437]: I0213 20:06:01.473593 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-cni\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.473906 kubelet[2437]: I0213 20:06:01.473613 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-flannel-cfg\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.473906 kubelet[2437]: I0213 20:06:01.473629 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-xtables-lock\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.473906 kubelet[2437]: I0213 20:06:01.473646 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eeb8549-fd6d-4762-8bcf-0561b1d92c49-xtables-lock\") pod \"kube-proxy-5xknl\" (UID: \"4eeb8549-fd6d-4762-8bcf-0561b1d92c49\") " pod="kube-system/kube-proxy-5xknl" Feb 13 20:06:01.473906 kubelet[2437]: I0213 20:06:01.473661 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-cni-plugin\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.473906 kubelet[2437]: I0213 20:06:01.473754 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8j4g\" (UniqueName: \"kubernetes.io/projected/4eeb8549-fd6d-4762-8bcf-0561b1d92c49-kube-api-access-l8j4g\") pod \"kube-proxy-5xknl\" (UID: \"4eeb8549-fd6d-4762-8bcf-0561b1d92c49\") " pod="kube-system/kube-proxy-5xknl" Feb 13 20:06:01.474025 kubelet[2437]: I0213 20:06:01.473806 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-run\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.474025 kubelet[2437]: I0213 20:06:01.473834 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4eeb8549-fd6d-4762-8bcf-0561b1d92c49-kube-proxy\") pod \"kube-proxy-5xknl\" (UID: \"4eeb8549-fd6d-4762-8bcf-0561b1d92c49\") " pod="kube-system/kube-proxy-5xknl" Feb 13 20:06:01.474025 kubelet[2437]: I0213 20:06:01.473850 2437 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r94c\" (UniqueName: \"kubernetes.io/projected/dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8-kube-api-access-7r94c\") pod \"kube-flannel-ds-mkbnt\" (UID: \"dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8\") " pod="kube-flannel/kube-flannel-ds-mkbnt" Feb 13 20:06:01.713273 kubelet[2437]: E0213 20:06:01.713161 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.714535 containerd[1440]: time="2025-02-13T20:06:01.714487531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xknl,Uid:4eeb8549-fd6d-4762-8bcf-0561b1d92c49,Namespace:kube-system,Attempt:0,}" Feb 13 20:06:01.719842 kubelet[2437]: E0213 20:06:01.719815 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.720241 containerd[1440]: time="2025-02-13T20:06:01.720201231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mkbnt,Uid:dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:06:01.736845 containerd[1440]: time="2025-02-13T20:06:01.736768464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:01.736845 containerd[1440]: time="2025-02-13T20:06:01.736815910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:01.736845 containerd[1440]: time="2025-02-13T20:06:01.736826671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:01.737019 containerd[1440]: time="2025-02-13T20:06:01.736888678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:01.743866 containerd[1440]: time="2025-02-13T20:06:01.743772753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:01.744519 containerd[1440]: time="2025-02-13T20:06:01.743965455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:01.744519 containerd[1440]: time="2025-02-13T20:06:01.744483475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:01.744710 containerd[1440]: time="2025-02-13T20:06:01.744628052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:01.761545 systemd[1]: Started cri-containerd-e520c008239dd136448527e0b2e9c8021c6dc16c3c47d9128093f46d9a084bc4.scope - libcontainer container e520c008239dd136448527e0b2e9c8021c6dc16c3c47d9128093f46d9a084bc4. Feb 13 20:06:01.764551 systemd[1]: Started cri-containerd-f42594b866e5fed34d461f3adf1c7884d12543755c4cd525662b9fbfa1aec87b.scope - libcontainer container f42594b866e5fed34d461f3adf1c7884d12543755c4cd525662b9fbfa1aec87b. 
Feb 13 20:06:01.787621 containerd[1440]: time="2025-02-13T20:06:01.787575812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xknl,Uid:4eeb8549-fd6d-4762-8bcf-0561b1d92c49,Namespace:kube-system,Attempt:0,} returns sandbox id \"e520c008239dd136448527e0b2e9c8021c6dc16c3c47d9128093f46d9a084bc4\"" Feb 13 20:06:01.789125 kubelet[2437]: E0213 20:06:01.789041 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.793259 containerd[1440]: time="2025-02-13T20:06:01.793146895Z" level=info msg="CreateContainer within sandbox \"e520c008239dd136448527e0b2e9c8021c6dc16c3c47d9128093f46d9a084bc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:06:01.803725 containerd[1440]: time="2025-02-13T20:06:01.803694673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mkbnt,Uid:dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f42594b866e5fed34d461f3adf1c7884d12543755c4cd525662b9fbfa1aec87b\"" Feb 13 20:06:01.805656 kubelet[2437]: E0213 20:06:01.805632 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.806557 containerd[1440]: time="2025-02-13T20:06:01.806518960Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:06:01.807299 containerd[1440]: time="2025-02-13T20:06:01.807217000Z" level=info msg="CreateContainer within sandbox \"e520c008239dd136448527e0b2e9c8021c6dc16c3c47d9128093f46d9a084bc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"67817fea271784c07b955aa2aac4da7e0032760ffb300f1959ca8b42b041818f\"" Feb 13 20:06:01.807930 containerd[1440]: time="2025-02-13T20:06:01.807855074Z" level=info msg="StartContainer for \"67817fea271784c07b955aa2aac4da7e0032760ffb300f1959ca8b42b041818f\"" Feb 13 20:06:01.835535 systemd[1]: Started cri-containerd-67817fea271784c07b955aa2aac4da7e0032760ffb300f1959ca8b42b041818f.scope - libcontainer container 67817fea271784c07b955aa2aac4da7e0032760ffb300f1959ca8b42b041818f. Feb 13 20:06:01.860688 containerd[1440]: time="2025-02-13T20:06:01.860645771Z" level=info msg="StartContainer for \"67817fea271784c07b955aa2aac4da7e0032760ffb300f1959ca8b42b041818f\" returns successfully" Feb 13 20:06:02.071325 kubelet[2437]: E0213 20:06:02.071034 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:02.071672 kubelet[2437]: E0213 20:06:02.071648 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:03.133454 containerd[1440]: time="2025-02-13T20:06:03.133394928Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:06:03.133865 containerd[1440]: time="2025-02-13T20:06:03.133470736Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144" Feb 13 20:06:03.133960 kubelet[2437]: E0213 20:06:03.133661 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:06:03.133960 kubelet[2437]: E0213 20:06:03.133726 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:06:03.134297 kubelet[2437]: E0213 20:06:03.133898 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:06:03.135111 kubelet[2437]: E0213 20:06:03.135074 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:06:03.430426 kubelet[2437]: E0213 20:06:03.430305 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:03.444451 kubelet[2437]: I0213 20:06:03.444394 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xknl" podStartSLOduration=2.444351569 podStartE2EDuration="2.444351569s" podCreationTimestamp="2025-02-13 20:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:06:02.083360774 +0000 UTC m=+8.117387838" watchObservedRunningTime="2025-02-13 20:06:03.444351569 +0000 UTC m=+9.478378633" Feb 13 20:06:04.075202 kubelet[2437]: E0213 20:06:04.075085 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:04.075523 kubelet[2437]: E0213 20:06:04.075484 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:04.076064 kubelet[2437]: E0213 20:06:04.075953 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:06:04.121032 update_engine[1431]: I20250213 20:06:04.120481 1431 update_attempter.cc:509] Updating boot flags... 
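The flannel-cni-plugin pull above dies with a 429 from Docker Hub's rate limiter, surfaces as ErrImagePull, and is then rescheduled under ImagePullBackOff rather than retried immediately: the kubelet spaces pull attempts on a doubling backoff (10s initial, 5m cap in the default configuration), which is why the next attempt only appears at 20:06:19 further below. A rough sketch of that retry shape follows; the URL is illustrative and the backoff constants are assumptions, not read from this kubelet's config.

// Sketch of the doubling retry behind the ImagePullBackOff entries:
// wait 10s, 20s, 40s, ... capped at 5m between pull attempts.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func pullWithBackoff(manifestURL string, attempts int) error {
	delay := 10 * time.Second // assumed initial backoff
	const maxDelay = 5 * time.Minute
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(manifestURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			if resp.StatusCode == http.StatusTooManyRequests {
				// The 429 "toomanyrequests" case seen in the log.
				fmt.Printf("rate limited, next attempt in %s\n", delay)
			}
		}
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("still failing after %d attempts", attempts)
}

func main() {
	// Illustrative URL only; not the registry endpoint from the log.
	_ = pullWithBackoff("https://registry.example/v2/flannel/flannel-cni-plugin/manifests/v1.1.2", 3)
}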
Feb 13 20:06:04.142582 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2763)
Feb 13 20:06:04.167400 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2767)
Feb 13 20:06:04.203127 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2767)
Feb 13 20:06:05.076007 kubelet[2437]: E0213 20:06:05.075978 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:19.043640 kubelet[2437]: E0213 20:06:19.043593 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:19.044710 containerd[1440]: time="2025-02-13T20:06:19.044426869Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:06:19.202794 systemd[1]: Started sshd@5-10.0.0.156:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126).
Feb 13 20:06:19.237932 sshd[2772]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:19.239044 sshd[2772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:19.242386 systemd-logind[1427]: New session 6 of user core.
Feb 13 20:06:19.248532 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:06:19.361683 sshd[2772]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:19.365800 systemd[1]: sshd@5-10.0.0.156:22-10.0.0.1:48126.service: Deactivated successfully.
Feb 13 20:06:19.368756 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:06:19.369263 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:06:19.370164 systemd-logind[1427]: Removed session 6.
Feb 13 20:06:20.155247 containerd[1440]: time="2025-02-13T20:06:20.155191501Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:06:20.155643 containerd[1440]: time="2025-02-13T20:06:20.155267904Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:06:20.155672 kubelet[2437]: E0213 20:06:20.155411 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:06:20.155672 kubelet[2437]: E0213 20:06:20.155453 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:06:20.156604 kubelet[2437]: E0213 20:06:20.155534 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:06:20.156885 kubelet[2437]: E0213 20:06:20.156822 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:06:24.372770 systemd[1]: Started sshd@6-10.0.0.156:22-10.0.0.1:49746.service - OpenSSH per-connection server daemon (10.0.0.1:49746).
Feb 13 20:06:24.408655 sshd[2788]: Accepted publickey for core from 10.0.0.1 port 49746 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:24.409820 sshd[2788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:24.412975 systemd-logind[1427]: New session 7 of user core.
Feb 13 20:06:24.424510 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:06:24.531575 sshd[2788]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:24.534507 systemd[1]: sshd@6-10.0.0.156:22-10.0.0.1:49746.service: Deactivated successfully.
Feb 13 20:06:24.536056 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:06:24.537769 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:06:24.538774 systemd-logind[1427]: Removed session 7.
Feb 13 20:06:29.541802 systemd[1]: Started sshd@7-10.0.0.156:22-10.0.0.1:49748.service - OpenSSH per-connection server daemon (10.0.0.1:49748).
Feb 13 20:06:29.577160 sshd[2803]: Accepted publickey for core from 10.0.0.1 port 49748 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:29.578329 sshd[2803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:29.581695 systemd-logind[1427]: New session 8 of user core.
Feb 13 20:06:29.588511 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:06:29.692913 sshd[2803]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:29.695305 systemd[1]: sshd@7-10.0.0.156:22-10.0.0.1:49748.service: Deactivated successfully.
Feb 13 20:06:29.696750 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:06:29.698152 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:06:29.698914 systemd-logind[1427]: Removed session 8.
Feb 13 20:06:34.703803 systemd[1]: Started sshd@8-10.0.0.156:22-10.0.0.1:59880.service - OpenSSH per-connection server daemon (10.0.0.1:59880).
Feb 13 20:06:34.739320 sshd[2822]: Accepted publickey for core from 10.0.0.1 port 59880 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:34.740561 sshd[2822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:34.744021 systemd-logind[1427]: New session 9 of user core.
Feb 13 20:06:34.757534 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:06:34.862016 sshd[2822]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:34.865006 systemd[1]: sshd@8-10.0.0.156:22-10.0.0.1:59880.service: Deactivated successfully.
Feb 13 20:06:34.866524 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:06:34.867110 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:06:34.867972 systemd-logind[1427]: Removed session 9.
Feb 13 20:06:35.043682 kubelet[2437]: E0213 20:06:35.043445 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:35.044940 kubelet[2437]: E0213 20:06:35.044906 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:06:39.873555 systemd[1]: Started sshd@9-10.0.0.156:22-10.0.0.1:59884.service - OpenSSH per-connection server daemon (10.0.0.1:59884).
Feb 13 20:06:39.909445 sshd[2837]: Accepted publickey for core from 10.0.0.1 port 59884 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:39.910569 sshd[2837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:39.914297 systemd-logind[1427]: New session 10 of user core.
Feb 13 20:06:39.925498 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:06:40.029467 sshd[2837]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:40.032415 systemd[1]: sshd@9-10.0.0.156:22-10.0.0.1:59884.service: Deactivated successfully.
Feb 13 20:06:40.033933 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:06:40.035063 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:06:40.035955 systemd-logind[1427]: Removed session 10.
Feb 13 20:06:45.040898 systemd[1]: Started sshd@10-10.0.0.156:22-10.0.0.1:60988.service - OpenSSH per-connection server daemon (10.0.0.1:60988).
Feb 13 20:06:45.076534 sshd[2853]: Accepted publickey for core from 10.0.0.1 port 60988 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:45.077744 sshd[2853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:45.081563 systemd-logind[1427]: New session 11 of user core.
Feb 13 20:06:45.096408 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:06:45.200724 sshd[2853]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:45.203738 systemd[1]: sshd@10-10.0.0.156:22-10.0.0.1:60988.service: Deactivated successfully.
Feb 13 20:06:45.205698 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:06:45.206290 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:06:45.207058 systemd-logind[1427]: Removed session 11.
Feb 13 20:06:50.045413 kubelet[2437]: E0213 20:06:50.045095 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:50.048220 containerd[1440]: time="2025-02-13T20:06:50.046260953Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:06:50.214773 systemd[1]: Started sshd@11-10.0.0.156:22-10.0.0.1:32772.service - OpenSSH per-connection server daemon (10.0.0.1:32772).
Feb 13 20:06:50.250711 sshd[2868]: Accepted publickey for core from 10.0.0.1 port 32772 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:50.251884 sshd[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:50.255159 systemd-logind[1427]: New session 12 of user core.
Feb 13 20:06:50.266527 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:06:50.370152 sshd[2868]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:50.373106 systemd[1]: sshd@11-10.0.0.156:22-10.0.0.1:32772.service: Deactivated successfully.
Feb 13 20:06:50.374746 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:06:50.375451 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:06:50.376320 systemd-logind[1427]: Removed session 12.
Feb 13 20:06:51.410109 containerd[1440]: time="2025-02-13T20:06:51.410051754Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:06:51.410554 containerd[1440]: time="2025-02-13T20:06:51.410131876Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144"
Feb 13 20:06:51.410595 kubelet[2437]: E0213 20:06:51.410230 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:06:51.410595 kubelet[2437]: E0213 20:06:51.410274 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:06:51.411036 kubelet[2437]: E0213 20:06:51.410357 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:06:51.411853 kubelet[2437]: E0213 20:06:51.411814 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:06:55.384842 systemd[1]: Started sshd@12-10.0.0.156:22-10.0.0.1:39816.service - OpenSSH per-connection server daemon (10.0.0.1:39816).
Feb 13 20:06:55.420467 sshd[2885]: Accepted publickey for core from 10.0.0.1 port 39816 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:55.421668 sshd[2885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:55.424916 systemd-logind[1427]: New session 13 of user core.
Feb 13 20:06:55.434516 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:06:55.538125 sshd[2885]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:55.541121 systemd[1]: sshd@12-10.0.0.156:22-10.0.0.1:39816.service: Deactivated successfully.
Feb 13 20:06:55.542654 systemd[1]: session-13.scope: Deactivated successfully.
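[annotation] The pull attempts above land at 20:06:03, 20:06:19 and 20:06:50: the gaps widen because kubelet wraps image pulls in a doubling backoff. A sketch of that schedule, assuming the commonly cited kubelet defaults of a 10-second base doubling to a 5-minute cap (not read from this node's configuration):

def image_pull_backoff(base=10.0, cap=300.0, attempts=8):
    """Idealized kubelet-style doubling backoff between image pull retries."""
    delays, delay = [], base
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

# Prints [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
print(image_pull_backoff())

The gaps visible in the log run slightly longer than the raw delays because each failed pull itself takes about a second, and the ImagePullBackOff entries in between are the pod worker reporting the wait, not fresh pulls.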
Feb 13 20:06:55.543962 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:06:55.544826 systemd-logind[1427]: Removed session 13.
Feb 13 20:07:00.555922 systemd[1]: Started sshd@13-10.0.0.156:22-10.0.0.1:39826.service - OpenSSH per-connection server daemon (10.0.0.1:39826).
Feb 13 20:07:00.591078 sshd[2901]: Accepted publickey for core from 10.0.0.1 port 39826 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:00.592316 sshd[2901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:00.596308 systemd-logind[1427]: New session 14 of user core.
Feb 13 20:07:00.610524 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:07:00.715582 sshd[2901]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:00.718838 systemd[1]: sshd@13-10.0.0.156:22-10.0.0.1:39826.service: Deactivated successfully.
Feb 13 20:07:00.720764 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:07:00.721637 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:07:00.722519 systemd-logind[1427]: Removed session 14.
Feb 13 20:07:03.043217 kubelet[2437]: E0213 20:07:03.043145 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:03.044248 kubelet[2437]: E0213 20:07:03.044121 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:07:05.727851 systemd[1]: Started sshd@14-10.0.0.156:22-10.0.0.1:50334.service - OpenSSH per-connection server daemon (10.0.0.1:50334).
Feb 13 20:07:05.762921 sshd[2918]: Accepted publickey for core from 10.0.0.1 port 50334 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:05.764038 sshd[2918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:05.767759 systemd-logind[1427]: New session 15 of user core.
Feb 13 20:07:05.780525 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:07:05.883782 sshd[2918]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:05.886763 systemd[1]: sshd@14-10.0.0.156:22-10.0.0.1:50334.service: Deactivated successfully.
Feb 13 20:07:05.888999 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:07:05.889623 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:07:05.890341 systemd-logind[1427]: Removed session 15.
Feb 13 20:07:07.043333 kubelet[2437]: E0213 20:07:07.043300 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:10.893989 systemd[1]: Started sshd@15-10.0.0.156:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340).
Feb 13 20:07:10.929169 sshd[2933]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:10.930273 sshd[2933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:10.933571 systemd-logind[1427]: New session 16 of user core.
Feb 13 20:07:10.943563 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:07:11.047960 sshd[2933]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:11.050531 systemd[1]: sshd@15-10.0.0.156:22-10.0.0.1:50340.service: Deactivated successfully.
Feb 13 20:07:11.052492 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:07:11.054762 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:07:11.055927 systemd-logind[1427]: Removed session 16.
Feb 13 20:07:14.043984 kubelet[2437]: E0213 20:07:14.043887 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:14.046090 kubelet[2437]: E0213 20:07:14.045968 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:07:16.064973 systemd[1]: Started sshd@16-10.0.0.156:22-10.0.0.1:48168.service - OpenSSH per-connection server daemon (10.0.0.1:48168).
Feb 13 20:07:16.100998 sshd[2948]: Accepted publickey for core from 10.0.0.1 port 48168 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:16.102102 sshd[2948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:16.105935 systemd-logind[1427]: New session 17 of user core.
Feb 13 20:07:16.111523 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:07:16.216607 sshd[2948]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:16.219635 systemd[1]: sshd@16-10.0.0.156:22-10.0.0.1:48168.service: Deactivated successfully.
Feb 13 20:07:16.221891 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:07:16.222679 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:07:16.223466 systemd-logind[1427]: Removed session 17.
Feb 13 20:07:21.228987 systemd[1]: Started sshd@17-10.0.0.156:22-10.0.0.1:48170.service - OpenSSH per-connection server daemon (10.0.0.1:48170).
Feb 13 20:07:21.264581 sshd[2963]: Accepted publickey for core from 10.0.0.1 port 48170 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:21.265758 sshd[2963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:21.269706 systemd-logind[1427]: New session 18 of user core.
Feb 13 20:07:21.282602 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:07:21.385060 sshd[2963]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:21.388035 systemd[1]: sshd@17-10.0.0.156:22-10.0.0.1:48170.service: Deactivated successfully.
Feb 13 20:07:21.389754 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:07:21.390299 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:07:21.391028 systemd-logind[1427]: Removed session 18.
Feb 13 20:07:22.043553 kubelet[2437]: E0213 20:07:22.043525 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:23.043745 kubelet[2437]: E0213 20:07:23.043708 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:25.043717 kubelet[2437]: E0213 20:07:25.043675 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:25.044771 kubelet[2437]: E0213 20:07:25.044712 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:07:26.395782 systemd[1]: Started sshd@18-10.0.0.156:22-10.0.0.1:42946.service - OpenSSH per-connection server daemon (10.0.0.1:42946).
Feb 13 20:07:26.431416 sshd[2979]: Accepted publickey for core from 10.0.0.1 port 42946 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:26.432583 sshd[2979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:26.436016 systemd-logind[1427]: New session 19 of user core.
Feb 13 20:07:26.442519 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:07:26.547611 sshd[2979]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:26.550056 systemd[1]: sshd@18-10.0.0.156:22-10.0.0.1:42946.service: Deactivated successfully.
Feb 13 20:07:26.551497 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:07:26.552842 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:07:26.553875 systemd-logind[1427]: Removed session 19.
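[annotation] The recurring dns.go:153 warning is unrelated to the pull failures: /etc/resolv.conf on this node lists more nameservers than the three kubelet will propagate to pods (the same ceiling as glibc's MAXNS), so everything past "1.1.1.1 1.0.0.1 8.8.8.8" is silently dropped. A small check for that condition, assuming the standard resolv.conf path:

MAX_NAMESERVERS = 3  # kubelet's pass-through limit, mirroring glibc MAXNS

def check_resolv_conf(path="/etc/resolv.conf"):
    # Collect the addresses from well-formed "nameserver <addr>" lines.
    with open(path) as f:
        servers = [parts[1] for line in f
                   if len(parts := line.split()) >= 2
                   and parts[0] == "nameserver"]
    if len(servers) > MAX_NAMESERVERS:
        kept = " ".join(servers[:MAX_NAMESERVERS])
        print(f"{len(servers)} nameservers configured; only applied: {kept}")
    else:
        print("within limit:", " ".join(servers))

check_resolv_conf()

Trimming the list to three entries (or letting systemd-resolved front them) silences the warning without changing which resolvers actually answer.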
Feb 13 20:07:31.557973 systemd[1]: Started sshd@19-10.0.0.156:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956).
Feb 13 20:07:31.593689 sshd[2994]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:31.594864 sshd[2994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:31.598441 systemd-logind[1427]: New session 20 of user core.
Feb 13 20:07:31.608509 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:07:31.714013 sshd[2994]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:31.717339 systemd[1]: sshd@19-10.0.0.156:22-10.0.0.1:42956.service: Deactivated successfully.
Feb 13 20:07:31.719642 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:07:31.720360 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:07:31.721209 systemd-logind[1427]: Removed session 20.
Feb 13 20:07:33.044215 kubelet[2437]: E0213 20:07:33.044179 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:36.724972 systemd[1]: Started sshd@20-10.0.0.156:22-10.0.0.1:33184.service - OpenSSH per-connection server daemon (10.0.0.1:33184).
Feb 13 20:07:36.761458 sshd[3012]: Accepted publickey for core from 10.0.0.1 port 33184 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:36.762621 sshd[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:36.766300 systemd-logind[1427]: New session 21 of user core.
Feb 13 20:07:36.781540 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:07:36.885136 sshd[3012]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:36.888288 systemd[1]: sshd@20-10.0.0.156:22-10.0.0.1:33184.service: Deactivated successfully.
Feb 13 20:07:36.890924 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:07:36.891622 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:07:36.892412 systemd-logind[1427]: Removed session 21.
Feb 13 20:07:40.043903 kubelet[2437]: E0213 20:07:40.043721 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:40.045159 containerd[1440]: time="2025-02-13T20:07:40.045022020Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:07:41.169481 containerd[1440]: time="2025-02-13T20:07:41.169432620Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:07:41.169848 containerd[1440]: time="2025-02-13T20:07:41.169469220Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:07:41.169921 kubelet[2437]: E0213 20:07:41.169632 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:07:41.169921 kubelet[2437]: E0213 20:07:41.169674 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:07:41.170194 kubelet[2437]: E0213 20:07:41.169758 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:07:41.171246 kubelet[2437]: E0213 20:07:41.171200 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:07:41.895874 systemd[1]: Started sshd@21-10.0.0.156:22-10.0.0.1:33200.service - OpenSSH per-connection server daemon (10.0.0.1:33200).
Feb 13 20:07:41.931745 sshd[3028]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:41.932959 sshd[3028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:41.936326 systemd-logind[1427]: New session 22 of user core.
Feb 13 20:07:41.945583 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:07:42.050030 sshd[3028]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:42.053821 systemd[1]: sshd@21-10.0.0.156:22-10.0.0.1:33200.service: Deactivated successfully.
Feb 13 20:07:42.056287 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:07:42.057151 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:07:42.058225 systemd-logind[1427]: Removed session 22.
Feb 13 20:07:47.065164 systemd[1]: Started sshd@22-10.0.0.156:22-10.0.0.1:60892.service - OpenSSH per-connection server daemon (10.0.0.1:60892).
Feb 13 20:07:47.101674 sshd[3044]: Accepted publickey for core from 10.0.0.1 port 60892 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:47.102851 sshd[3044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:47.106732 systemd-logind[1427]: New session 23 of user core.
Feb 13 20:07:47.112599 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:07:47.217049 sshd[3044]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:47.220096 systemd[1]: sshd@22-10.0.0.156:22-10.0.0.1:60892.service: Deactivated successfully.
Feb 13 20:07:47.221823 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:07:47.222550 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:07:47.223392 systemd-logind[1427]: Removed session 23.
Feb 13 20:07:52.228089 systemd[1]: Started sshd@23-10.0.0.156:22-10.0.0.1:60900.service - OpenSSH per-connection server daemon (10.0.0.1:60900).
Feb 13 20:07:52.263547 sshd[3060]: Accepted publickey for core from 10.0.0.1 port 60900 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:52.264695 sshd[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:52.268166 systemd-logind[1427]: New session 24 of user core.
Feb 13 20:07:52.279784 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:07:52.384553 sshd[3060]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:52.387930 systemd[1]: sshd@23-10.0.0.156:22-10.0.0.1:60900.service: Deactivated successfully.
Feb 13 20:07:52.390573 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:07:52.391288 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:07:52.392302 systemd-logind[1427]: Removed session 24.
Feb 13 20:07:54.043097 kubelet[2437]: E0213 20:07:54.043029 2437 kubelet_node_status.go:461] "Node not becoming ready in time after startup"
Feb 13 20:07:54.098790 kubelet[2437]: E0213 20:07:54.098756 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:07:55.043879 kubelet[2437]: E0213 20:07:55.043851 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:07:55.044848 kubelet[2437]: E0213 20:07:55.044779 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:07:57.394886 systemd[1]: Started sshd@24-10.0.0.156:22-10.0.0.1:45576.service - OpenSSH per-connection server daemon (10.0.0.1:45576).
Feb 13 20:07:57.430945 sshd[3077]: Accepted publickey for core from 10.0.0.1 port 45576 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:07:57.432070 sshd[3077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:07:57.435509 systemd-logind[1427]: New session 25 of user core.
Feb 13 20:07:57.445516 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:07:57.550914 sshd[3077]: pam_unix(sshd:session): session closed for user core
Feb 13 20:07:57.554116 systemd[1]: sshd@24-10.0.0.156:22-10.0.0.1:45576.service: Deactivated successfully.
Feb 13 20:07:57.555692 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:07:57.556892 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:07:57.557727 systemd-logind[1427]: Removed session 25.
Feb 13 20:07:59.099462 kubelet[2437]: E0213 20:07:59.099426 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:02.561949 systemd[1]: Started sshd@25-10.0.0.156:22-10.0.0.1:54026.service - OpenSSH per-connection server daemon (10.0.0.1:54026).
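[annotation] From 20:07:54 kubelet starts reporting the downstream effect: the node never becomes Ready because the failed install-cni-plugin init container was supposed to copy the flannel binary into place, so no CNI configuration ever appears where the runtime looks for it. A sketch of the two paths worth checking on the node; these are the conventional defaults, not values read from this kubelet's flags:

import os

def report(path, label):
    # List what is present, or flag the directory as effectively empty/absent.
    entries = sorted(os.listdir(path)) if os.path.isdir(path) else []
    print(f"{label:22s} {path}: {', '.join(entries) or 'MISSING'}")

# An empty /etc/cni/net.d is exactly the "cni plugin not initialized" state;
# a missing flannel binary in /opt/cni/bin is the init container's undone job.
report("/etc/cni/net.d", "CNI network configs")
report("/opt/cni/bin", "CNI plugin binaries")

Once the image pull eventually succeeds and the init container copies /flannel into /opt/cni/bin, the network config follows and both the NetworkPluginNotReady and the node-readiness errors should clear.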
Feb 13 20:08:02.597212 sshd[3095]: Accepted publickey for core from 10.0.0.1 port 54026 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:02.598372 sshd[3095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:02.602145 systemd-logind[1427]: New session 26 of user core.
Feb 13 20:08:02.613514 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:08:02.719214 sshd[3095]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:02.722896 systemd[1]: sshd@25-10.0.0.156:22-10.0.0.1:54026.service: Deactivated successfully.
Feb 13 20:08:02.724415 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:08:02.725827 systemd-logind[1427]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:08:02.726854 systemd-logind[1427]: Removed session 26.
Feb 13 20:08:04.100283 kubelet[2437]: E0213 20:08:04.100246 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:07.043998 kubelet[2437]: E0213 20:08:07.043959 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:07.044639 kubelet[2437]: E0213 20:08:07.044589 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:08:07.729996 systemd[1]: Started sshd@26-10.0.0.156:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028).
Feb 13 20:08:07.765245 sshd[3110]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:07.766447 sshd[3110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:07.770395 systemd-logind[1427]: New session 27 of user core.
Feb 13 20:08:07.779535 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:08:07.882525 sshd[3110]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:07.885500 systemd[1]: sshd@26-10.0.0.156:22-10.0.0.1:54028.service: Deactivated successfully.
Feb 13 20:08:07.887104 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:08:07.888541 systemd-logind[1427]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:08:07.889346 systemd-logind[1427]: Removed session 27.
Feb 13 20:08:09.100991 kubelet[2437]: E0213 20:08:09.100938 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:12.893008 systemd[1]: Started sshd@27-10.0.0.156:22-10.0.0.1:59408.service - OpenSSH per-connection server daemon (10.0.0.1:59408).
Feb 13 20:08:12.928320 sshd[3125]: Accepted publickey for core from 10.0.0.1 port 59408 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:12.929501 sshd[3125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:12.933216 systemd-logind[1427]: New session 28 of user core.
Feb 13 20:08:12.943587 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:08:13.047308 sshd[3125]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:13.049888 systemd-logind[1427]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:08:13.050119 systemd[1]: sshd@27-10.0.0.156:22-10.0.0.1:59408.service: Deactivated successfully.
Feb 13 20:08:13.051705 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:08:13.053181 systemd-logind[1427]: Removed session 28.
Feb 13 20:08:14.044366 kubelet[2437]: E0213 20:08:14.044271 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:14.102517 kubelet[2437]: E0213 20:08:14.102451 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:18.060904 systemd[1]: Started sshd@28-10.0.0.156:22-10.0.0.1:59410.service - OpenSSH per-connection server daemon (10.0.0.1:59410).
Feb 13 20:08:18.097026 sshd[3140]: Accepted publickey for core from 10.0.0.1 port 59410 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:18.098175 sshd[3140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:18.101852 systemd-logind[1427]: New session 29 of user core.
Feb 13 20:08:18.114534 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:08:18.219374 sshd[3140]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:18.222656 systemd[1]: sshd@28-10.0.0.156:22-10.0.0.1:59410.service: Deactivated successfully.
Feb 13 20:08:18.225075 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:08:18.225845 systemd-logind[1427]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:08:18.227628 systemd-logind[1427]: Removed session 29.
Feb 13 20:08:19.103427 kubelet[2437]: E0213 20:08:19.103391 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:22.044916 kubelet[2437]: E0213 20:08:22.044756 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:22.045470 kubelet[2437]: E0213 20:08:22.045414 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:08:23.229922 systemd[1]: Started sshd@29-10.0.0.156:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778).
Feb 13 20:08:23.265355 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:23.266575 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:23.270718 systemd-logind[1427]: New session 30 of user core.
Feb 13 20:08:23.276539 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 20:08:23.380872 sshd[3157]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:23.383943 systemd[1]: sshd@29-10.0.0.156:22-10.0.0.1:33778.service: Deactivated successfully.
Feb 13 20:08:23.385594 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:08:23.386219 systemd-logind[1427]: Session 30 logged out. Waiting for processes to exit.
Feb 13 20:08:23.387131 systemd-logind[1427]: Removed session 30.
Feb 13 20:08:24.044250 kubelet[2437]: E0213 20:08:24.044008 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:24.044250 kubelet[2437]: E0213 20:08:24.044149 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:24.105031 kubelet[2437]: E0213 20:08:24.104977 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:28.394859 systemd[1]: Started sshd@30-10.0.0.156:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790).
Feb 13 20:08:28.430189 sshd[3173]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:28.431308 sshd[3173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:28.434626 systemd-logind[1427]: New session 31 of user core.
Feb 13 20:08:28.450599 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 20:08:28.554595 sshd[3173]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:28.557646 systemd-logind[1427]: Session 31 logged out. Waiting for processes to exit.
Feb 13 20:08:28.557960 systemd[1]: sshd@30-10.0.0.156:22-10.0.0.1:33790.service: Deactivated successfully.
Feb 13 20:08:28.559745 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 20:08:28.561072 systemd-logind[1427]: Removed session 31.
Feb 13 20:08:29.106118 kubelet[2437]: E0213 20:08:29.106073 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:33.564796 systemd[1]: Started sshd@31-10.0.0.156:22-10.0.0.1:43026.service - OpenSSH per-connection server daemon (10.0.0.1:43026).
Feb 13 20:08:33.600835 sshd[3190]: Accepted publickey for core from 10.0.0.1 port 43026 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:33.601973 sshd[3190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:33.605674 systemd-logind[1427]: New session 32 of user core.
Feb 13 20:08:33.611531 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 20:08:33.715459 sshd[3190]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:33.718446 systemd[1]: sshd@31-10.0.0.156:22-10.0.0.1:43026.service: Deactivated successfully.
Feb 13 20:08:33.720660 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 20:08:33.721287 systemd-logind[1427]: Session 32 logged out. Waiting for processes to exit.
Feb 13 20:08:33.722078 systemd-logind[1427]: Removed session 32.
Feb 13 20:08:34.106826 kubelet[2437]: E0213 20:08:34.106782 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:36.043991 kubelet[2437]: E0213 20:08:36.043942 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:08:36.044531 kubelet[2437]: E0213 20:08:36.044482 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:08:38.725906 systemd[1]: Started sshd@32-10.0.0.156:22-10.0.0.1:43038.service - OpenSSH per-connection server daemon (10.0.0.1:43038).
Feb 13 20:08:38.761365 sshd[3206]: Accepted publickey for core from 10.0.0.1 port 43038 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:38.762565 sshd[3206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:38.766318 systemd-logind[1427]: New session 33 of user core.
Feb 13 20:08:38.775511 systemd[1]: Started session-33.scope - Session 33 of User core.
Feb 13 20:08:38.880008 sshd[3206]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:38.883084 systemd[1]: sshd@32-10.0.0.156:22-10.0.0.1:43038.service: Deactivated successfully.
Feb 13 20:08:38.884590 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 20:08:38.885171 systemd-logind[1427]: Session 33 logged out. Waiting for processes to exit.
Feb 13 20:08:38.886189 systemd-logind[1427]: Removed session 33.
Feb 13 20:08:39.108120 kubelet[2437]: E0213 20:08:39.108078 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:43.890821 systemd[1]: Started sshd@33-10.0.0.156:22-10.0.0.1:51484.service - OpenSSH per-connection server daemon (10.0.0.1:51484).
Feb 13 20:08:43.925970 sshd[3221]: Accepted publickey for core from 10.0.0.1 port 51484 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:43.927212 sshd[3221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:43.930413 systemd-logind[1427]: New session 34 of user core.
Feb 13 20:08:43.936515 systemd[1]: Started session-34.scope - Session 34 of User core.
Feb 13 20:08:44.040197 sshd[3221]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:44.043185 systemd[1]: sshd@33-10.0.0.156:22-10.0.0.1:51484.service: Deactivated successfully.
Feb 13 20:08:44.044878 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 20:08:44.046046 systemd-logind[1427]: Session 34 logged out. Waiting for processes to exit.
Feb 13 20:08:44.046759 systemd-logind[1427]: Removed session 34.
Feb 13 20:08:44.109008 kubelet[2437]: E0213 20:08:44.108965 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:08:49.051799 systemd[1]: Started sshd@34-10.0.0.156:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486).
Feb 13 20:08:49.088692 sshd[3237]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:08:49.089865 sshd[3237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:49.093598 systemd-logind[1427]: New session 35 of user core.
Feb 13 20:08:49.099522 systemd[1]: Started session-35.scope - Session 35 of User core.
Feb 13 20:08:49.109717 kubelet[2437]: E0213 20:08:49.109664 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:08:49.204705 sshd[3237]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:49.208369 systemd[1]: sshd@34-10.0.0.156:22-10.0.0.1:51486.service: Deactivated successfully. Feb 13 20:08:49.210317 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:08:49.211946 systemd-logind[1427]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:08:49.212769 systemd-logind[1427]: Removed session 35. Feb 13 20:08:50.044248 kubelet[2437]: E0213 20:08:50.044204 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:08:50.044982 kubelet[2437]: E0213 20:08:50.044925 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:08:54.110704 kubelet[2437]: E0213 20:08:54.110670 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:08:54.218750 systemd[1]: Started sshd@35-10.0.0.156:22-10.0.0.1:41580.service - OpenSSH per-connection server daemon (10.0.0.1:41580). Feb 13 20:08:54.254516 sshd[3254]: Accepted publickey for core from 10.0.0.1 port 41580 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:08:54.255815 sshd[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:54.260707 systemd-logind[1427]: New session 36 of user core. Feb 13 20:08:54.267525 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:08:54.375841 sshd[3254]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:54.379470 systemd[1]: sshd@35-10.0.0.156:22-10.0.0.1:41580.service: Deactivated successfully. Feb 13 20:08:54.381145 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:08:54.382443 systemd-logind[1427]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:08:54.383339 systemd-logind[1427]: Removed session 36. 
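The recurring "cni plugin not initialized" entries follow directly from the failed pulls: kubelet reports NetworkReady=false until a CNI config appears under /etc/cni/net.d, and it is flannel's never-started containers that would install one. For reference, the config the stock flannel deployment writes is a conflist along these lines (taken from flannel's default kube-flannel manifest, not from this node):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }

Once a file like that exists at /etc/cni/net.d/10-flannel.conflist and the flannel binary sits in /opt/cni/bin, the NetworkReady errors stop.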
Feb 13 20:08:55.043504 kubelet[2437]: E0213 20:08:55.043477 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:08:59.111970 kubelet[2437]: E0213 20:08:59.111934 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:08:59.389884 systemd[1]: Started sshd@36-10.0.0.156:22-10.0.0.1:41594.service - OpenSSH per-connection server daemon (10.0.0.1:41594). Feb 13 20:08:59.425981 sshd[3269]: Accepted publickey for core from 10.0.0.1 port 41594 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:08:59.427105 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:59.430855 systemd-logind[1427]: New session 37 of user core. Feb 13 20:08:59.440523 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:08:59.544626 sshd[3269]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:59.547260 systemd[1]: sshd@36-10.0.0.156:22-10.0.0.1:41594.service: Deactivated successfully. Feb 13 20:08:59.548929 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:08:59.550218 systemd-logind[1427]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:08:59.551186 systemd-logind[1427]: Removed session 37. Feb 13 20:09:04.043597 kubelet[2437]: E0213 20:09:04.043566 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:04.044679 containerd[1440]: time="2025-02-13T20:09:04.044490008Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:09:04.112939 kubelet[2437]: E0213 20:09:04.112878 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:04.559864 systemd[1]: Started sshd@37-10.0.0.156:22-10.0.0.1:32880.service - OpenSSH per-connection server daemon (10.0.0.1:32880). Feb 13 20:09:04.595029 sshd[3287]: Accepted publickey for core from 10.0.0.1 port 32880 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:04.596166 sshd[3287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:04.599840 systemd-logind[1427]: New session 38 of user core. Feb 13 20:09:04.606515 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:09:04.711474 sshd[3287]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:04.714420 systemd[1]: sshd@37-10.0.0.156:22-10.0.0.1:32880.service: Deactivated successfully. Feb 13 20:09:04.716818 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:09:04.717687 systemd-logind[1427]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:09:04.718489 systemd-logind[1427]: Removed session 38. 
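The dns.go "Nameserver limits exceeded" warnings are unrelated to the pull failures: glibc resolvers, and therefore kubelet's handling of pod resolv.conf, honor at most three nameserver entries, so with more than three configured kubelet keeps only the first three, shown in the log as 1.1.1.1 1.0.0.1 8.8.8.8. Trimming the node's /etc/resolv.conf to three entries silences the warning; a sketch, assuming these three resolvers are the ones to keep:

    # /etc/resolv.conf -- at most three nameserver lines are honored;
    # anything beyond the third is dropped, which is what kubelet flags.
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8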
Feb 13 20:09:05.152263 containerd[1440]: time="2025-02-13T20:09:05.152204057Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:09:05.152598 containerd[1440]: time="2025-02-13T20:09:05.152279376Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:09:05.152630 kubelet[2437]: E0213 20:09:05.152420 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:09:05.152630 kubelet[2437]: E0213 20:09:05.152462 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:09:05.152847 kubelet[2437]: E0213 20:09:05.152573 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:09:05.153914 kubelet[2437]: E0213 20:09:05.153872 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:09:09.114259 kubelet[2437]: E0213 20:09:09.114208 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:09.722037 systemd[1]: Started sshd@38-10.0.0.156:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). Feb 13 20:09:09.757757 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:09.758969 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:09.763050 systemd-logind[1427]: New session 39 of user core. Feb 13 20:09:09.773509 systemd[1]: Started session-39.scope - Session 39 of User core. 
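The "Unhandled Error" entry above dumps the failing init container as a Go struct. Rendered back into manifest YAML (values read from that dump, so a reconstruction rather than the authoritative kube-flannel.yml), the pod spec fragment is:

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      imagePullPolicy: IfNotPresent
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin          # a hostPath onto /opt/cni/bin in the stock manifest
        mountPath: /opt/cni/bin

That is, the container does nothing but copy the flannel CNI binary onto the host; until its image can be pulled, nothing installs the plugin and the NetworkReady errors persist.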
Feb 13 20:09:09.877692 sshd[3302]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:09.880751 systemd[1]: sshd@38-10.0.0.156:22-10.0.0.1:32888.service: Deactivated successfully. Feb 13 20:09:09.882373 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:09:09.883856 systemd-logind[1427]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:09:09.884836 systemd-logind[1427]: Removed session 39. Feb 13 20:09:14.115083 kubelet[2437]: E0213 20:09:14.115041 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:14.889161 systemd[1]: Started sshd@39-10.0.0.156:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Feb 13 20:09:14.924290 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:14.925404 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:14.929172 systemd-logind[1427]: New session 40 of user core. Feb 13 20:09:14.938525 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:09:15.045809 sshd[3318]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:15.048854 systemd[1]: sshd@39-10.0.0.156:22-10.0.0.1:52344.service: Deactivated successfully. Feb 13 20:09:15.051726 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:09:15.052364 systemd-logind[1427]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:09:15.053451 systemd-logind[1427]: Removed session 40. Feb 13 20:09:19.115915 kubelet[2437]: E0213 20:09:19.115874 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:20.045877 kubelet[2437]: E0213 20:09:20.045841 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:20.046493 kubelet[2437]: E0213 20:09:20.046457 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:09:20.060752 systemd[1]: Started sshd@40-10.0.0.156:22-10.0.0.1:52352.service - OpenSSH per-connection server daemon (10.0.0.1:52352). Feb 13 20:09:20.096331 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 52352 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:20.097674 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:20.101210 systemd-logind[1427]: New session 41 of user core. 
Feb 13 20:09:20.110533 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:09:20.216051 sshd[3333]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:20.226707 systemd[1]: sshd@40-10.0.0.156:22-10.0.0.1:52352.service: Deactivated successfully. Feb 13 20:09:20.228874 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:09:20.230365 systemd-logind[1427]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:09:20.238738 systemd[1]: Started sshd@41-10.0.0.156:22-10.0.0.1:52368.service - OpenSSH per-connection server daemon (10.0.0.1:52368). Feb 13 20:09:20.240073 systemd-logind[1427]: Removed session 41. Feb 13 20:09:20.269865 sshd[3349]: Accepted publickey for core from 10.0.0.1 port 52368 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:20.270995 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:20.275211 systemd-logind[1427]: New session 42 of user core. Feb 13 20:09:20.283516 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:09:20.423993 sshd[3349]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:20.434767 systemd[1]: sshd@41-10.0.0.156:22-10.0.0.1:52368.service: Deactivated successfully. Feb 13 20:09:20.436929 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:09:20.439269 systemd-logind[1427]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:09:20.451644 systemd[1]: Started sshd@42-10.0.0.156:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). Feb 13 20:09:20.452603 systemd-logind[1427]: Removed session 42. Feb 13 20:09:20.484601 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:20.485718 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:20.489192 systemd-logind[1427]: New session 43 of user core. Feb 13 20:09:20.498507 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:09:20.603148 sshd[3362]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:20.606129 systemd[1]: sshd@42-10.0.0.156:22-10.0.0.1:52370.service: Deactivated successfully. Feb 13 20:09:20.607851 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:09:20.608490 systemd-logind[1427]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:09:20.609574 systemd-logind[1427]: Removed session 43. Feb 13 20:09:24.117161 kubelet[2437]: E0213 20:09:24.117120 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:25.043950 kubelet[2437]: E0213 20:09:25.043915 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:25.044064 kubelet[2437]: E0213 20:09:25.044001 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:25.613883 systemd[1]: Started sshd@43-10.0.0.156:22-10.0.0.1:38794.service - OpenSSH per-connection server daemon (10.0.0.1:38794). 
Feb 13 20:09:25.649262 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 38794 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:25.650666 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:25.654474 systemd-logind[1427]: New session 44 of user core. Feb 13 20:09:25.662588 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:09:25.764909 sshd[3378]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:25.767972 systemd[1]: sshd@43-10.0.0.156:22-10.0.0.1:38794.service: Deactivated successfully. Feb 13 20:09:25.769671 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:09:25.770271 systemd-logind[1427]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:09:25.771041 systemd-logind[1427]: Removed session 44. Feb 13 20:09:29.118671 kubelet[2437]: E0213 20:09:29.118625 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:30.775006 systemd[1]: Started sshd@44-10.0.0.156:22-10.0.0.1:38798.service - OpenSSH per-connection server daemon (10.0.0.1:38798). Feb 13 20:09:30.810729 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 38798 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:30.811920 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:30.815201 systemd-logind[1427]: New session 45 of user core. Feb 13 20:09:30.825520 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:09:30.930798 sshd[3393]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:30.933935 systemd[1]: sshd@44-10.0.0.156:22-10.0.0.1:38798.service: Deactivated successfully. Feb 13 20:09:30.935627 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:09:30.936166 systemd-logind[1427]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:09:30.936874 systemd-logind[1427]: Removed session 45. Feb 13 20:09:33.043411 kubelet[2437]: E0213 20:09:33.043278 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:33.043908 kubelet[2437]: E0213 20:09:33.043856 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:09:34.119902 kubelet[2437]: E0213 20:09:34.119861 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:35.941908 systemd[1]: Started sshd@45-10.0.0.156:22-10.0.0.1:41656.service - OpenSSH per-connection server daemon (10.0.0.1:41656). Feb 13 20:09:35.977172 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 41656 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:35.978304 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:35.982976 systemd-logind[1427]: New session 46 of user core. Feb 13 20:09:35.989534 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:09:36.097144 sshd[3409]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:36.100421 systemd[1]: sshd@45-10.0.0.156:22-10.0.0.1:41656.service: Deactivated successfully. Feb 13 20:09:36.102653 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:09:36.103427 systemd-logind[1427]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:09:36.104519 systemd-logind[1427]: Removed session 46. Feb 13 20:09:39.121449 kubelet[2437]: E0213 20:09:39.121404 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:41.110821 systemd[1]: Started sshd@46-10.0.0.156:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668). Feb 13 20:09:41.146174 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:41.147465 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:41.151049 systemd-logind[1427]: New session 47 of user core. Feb 13 20:09:41.162522 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:09:41.270787 sshd[3423]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:41.273688 systemd[1]: sshd@46-10.0.0.156:22-10.0.0.1:41668.service: Deactivated successfully. Feb 13 20:09:41.276095 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:09:41.276792 systemd-logind[1427]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:09:41.277650 systemd-logind[1427]: Removed session 47. 
Feb 13 20:09:44.043970 kubelet[2437]: E0213 20:09:44.043922 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:44.044564 kubelet[2437]: E0213 20:09:44.044503 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:09:44.122535 kubelet[2437]: E0213 20:09:44.122450 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:46.288983 systemd[1]: Started sshd@47-10.0.0.156:22-10.0.0.1:47644.service - OpenSSH per-connection server daemon (10.0.0.1:47644). Feb 13 20:09:46.325139 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 47644 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:46.326286 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:46.329976 systemd-logind[1427]: New session 48 of user core. Feb 13 20:09:46.339528 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:09:46.444329 sshd[3437]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:46.447616 systemd[1]: sshd@47-10.0.0.156:22-10.0.0.1:47644.service: Deactivated successfully. Feb 13 20:09:46.449941 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:09:46.450742 systemd-logind[1427]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:09:46.451566 systemd-logind[1427]: Removed session 48. Feb 13 20:09:49.043748 kubelet[2437]: E0213 20:09:49.043662 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:49.123635 kubelet[2437]: E0213 20:09:49.123590 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:51.455900 systemd[1]: Started sshd@48-10.0.0.156:22-10.0.0.1:47646.service - OpenSSH per-connection server daemon (10.0.0.1:47646). Feb 13 20:09:51.491490 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 47646 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:51.492648 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:51.496033 systemd-logind[1427]: New session 49 of user core. Feb 13 20:09:51.504523 systemd[1]: Started session-49.scope - Session 49 of User core. 
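When pulls back off like this, the remaining anonymous quota can be checked from the node itself using Docker's documented rate-limit probe (the ratelimitpreview/test repository exists for exactly this purpose; the sketch assumes curl and jq are on the host):

    # Fetch an anonymous token, then read the RateLimit headers from a
    # HEAD request, which per Docker's docs should not itself consume quota.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
      | grep -i ratelimit

The response carries ratelimit-limit and ratelimit-remaining headers in the form "100;w=21600", i.e. pulls per six-hour window.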
Feb 13 20:09:51.610478 sshd[3451]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:51.613775 systemd[1]: sshd@48-10.0.0.156:22-10.0.0.1:47646.service: Deactivated successfully. Feb 13 20:09:51.616878 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:09:51.617596 systemd-logind[1427]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:09:51.618483 systemd-logind[1427]: Removed session 49. Feb 13 20:09:54.124955 kubelet[2437]: E0213 20:09:54.124910 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:09:56.043480 kubelet[2437]: E0213 20:09:56.043284 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:09:56.044028 kubelet[2437]: E0213 20:09:56.043877 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:09:56.621972 systemd[1]: Started sshd@49-10.0.0.156:22-10.0.0.1:45016.service - OpenSSH per-connection server daemon (10.0.0.1:45016). Feb 13 20:09:56.657590 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 45016 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:09:56.658741 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:56.662529 systemd-logind[1427]: New session 50 of user core. Feb 13 20:09:56.667596 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:09:56.771743 sshd[3467]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:56.774693 systemd[1]: sshd@49-10.0.0.156:22-10.0.0.1:45016.service: Deactivated successfully. Feb 13 20:09:56.777015 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:09:56.777580 systemd-logind[1427]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:09:56.778694 systemd-logind[1427]: Removed session 50. Feb 13 20:09:59.125771 kubelet[2437]: E0213 20:09:59.125735 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:01.782857 systemd[1]: Started sshd@50-10.0.0.156:22-10.0.0.1:45018.service - OpenSSH per-connection server daemon (10.0.0.1:45018). Feb 13 20:10:01.818870 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 45018 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:01.820160 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:01.823525 systemd-logind[1427]: New session 51 of user core. 
Feb 13 20:10:01.838589 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:10:01.945335 sshd[3481]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:01.948290 systemd[1]: sshd@50-10.0.0.156:22-10.0.0.1:45018.service: Deactivated successfully. Feb 13 20:10:01.950202 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:10:01.951727 systemd-logind[1427]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:10:01.952902 systemd-logind[1427]: Removed session 51. Feb 13 20:10:03.044541 kubelet[2437]: E0213 20:10:03.044502 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:04.127226 kubelet[2437]: E0213 20:10:04.127181 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:06.955892 systemd[1]: Started sshd@51-10.0.0.156:22-10.0.0.1:37518.service - OpenSSH per-connection server daemon (10.0.0.1:37518). Feb 13 20:10:06.991494 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 37518 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:06.992652 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:06.996452 systemd-logind[1427]: New session 52 of user core. Feb 13 20:10:07.010529 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:10:07.043920 kubelet[2437]: E0213 20:10:07.043881 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:07.044553 kubelet[2437]: E0213 20:10:07.044515 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:10:07.117150 sshd[3498]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:07.120472 systemd[1]: sshd@51-10.0.0.156:22-10.0.0.1:37518.service: Deactivated successfully. Feb 13 20:10:07.122126 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:10:07.123642 systemd-logind[1427]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:10:07.124840 systemd-logind[1427]: Removed session 52. Feb 13 20:10:09.128282 kubelet[2437]: E0213 20:10:09.128216 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:12.131150 systemd[1]: Started sshd@52-10.0.0.156:22-10.0.0.1:37528.service - OpenSSH per-connection server daemon (10.0.0.1:37528). 
Feb 13 20:10:12.166691 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 37528 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:12.167914 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:12.171940 systemd-logind[1427]: New session 53 of user core. Feb 13 20:10:12.178517 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:10:12.282594 sshd[3513]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:12.286004 systemd[1]: sshd@52-10.0.0.156:22-10.0.0.1:37528.service: Deactivated successfully. Feb 13 20:10:12.287549 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:10:12.288134 systemd-logind[1427]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:10:12.288878 systemd-logind[1427]: Removed session 53. Feb 13 20:10:14.128826 kubelet[2437]: E0213 20:10:14.128768 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:17.296693 systemd[1]: Started sshd@53-10.0.0.156:22-10.0.0.1:58496.service - OpenSSH per-connection server daemon (10.0.0.1:58496). Feb 13 20:10:17.332015 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 58496 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:17.333194 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:17.337260 systemd-logind[1427]: New session 54 of user core. Feb 13 20:10:17.344525 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:10:17.449612 sshd[3527]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:17.452646 systemd[1]: sshd@53-10.0.0.156:22-10.0.0.1:58496.service: Deactivated successfully. Feb 13 20:10:17.454162 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:10:17.455617 systemd-logind[1427]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:10:17.457033 systemd-logind[1427]: Removed session 54. Feb 13 20:10:19.130018 kubelet[2437]: E0213 20:10:19.129976 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:20.044145 kubelet[2437]: E0213 20:10:20.044097 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:20.044939 kubelet[2437]: E0213 20:10:20.044879 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:10:22.464674 systemd[1]: Started sshd@54-10.0.0.156:22-10.0.0.1:57182.service - OpenSSH per-connection server daemon (10.0.0.1:57182). Feb 13 20:10:22.499875 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 57182 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:22.501021 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:22.504297 systemd-logind[1427]: New session 55 of user core. Feb 13 20:10:22.512512 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:10:22.617392 sshd[3542]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:22.620618 systemd[1]: sshd@54-10.0.0.156:22-10.0.0.1:57182.service: Deactivated successfully. Feb 13 20:10:22.622630 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:10:22.623241 systemd-logind[1427]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:10:22.624219 systemd-logind[1427]: Removed session 55. Feb 13 20:10:24.130768 kubelet[2437]: E0213 20:10:24.130732 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:27.630784 systemd[1]: Started sshd@55-10.0.0.156:22-10.0.0.1:57190.service - OpenSSH per-connection server daemon (10.0.0.1:57190). Feb 13 20:10:27.666178 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 57190 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:27.667412 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:27.671222 systemd-logind[1427]: New session 56 of user core. Feb 13 20:10:27.685510 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:10:27.791957 sshd[3557]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:27.795074 systemd[1]: sshd@55-10.0.0.156:22-10.0.0.1:57190.service: Deactivated successfully. Feb 13 20:10:27.796597 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:10:27.798509 systemd-logind[1427]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:10:27.799300 systemd-logind[1427]: Removed session 56. 
Feb 13 20:10:29.132452 kubelet[2437]: E0213 20:10:29.132417 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:32.044319 kubelet[2437]: E0213 20:10:32.044285 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:32.045074 kubelet[2437]: E0213 20:10:32.045010 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:10:32.801725 systemd[1]: Started sshd@56-10.0.0.156:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Feb 13 20:10:32.837172 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:32.838298 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:32.842027 systemd-logind[1427]: New session 57 of user core. Feb 13 20:10:32.857512 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:10:32.967069 sshd[3573]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:32.970960 systemd[1]: sshd@56-10.0.0.156:22-10.0.0.1:52344.service: Deactivated successfully. Feb 13 20:10:32.972548 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:10:32.973980 systemd-logind[1427]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:10:32.974819 systemd-logind[1427]: Removed session 57. Feb 13 20:10:34.133741 kubelet[2437]: E0213 20:10:34.133672 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:37.978951 systemd[1]: Started sshd@57-10.0.0.156:22-10.0.0.1:52346.service - OpenSSH per-connection server daemon (10.0.0.1:52346). Feb 13 20:10:38.014662 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 52346 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:38.015796 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:38.019211 systemd-logind[1427]: New session 58 of user core. Feb 13 20:10:38.028520 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:10:38.134509 sshd[3588]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:38.137608 systemd[1]: sshd@57-10.0.0.156:22-10.0.0.1:52346.service: Deactivated successfully. Feb 13 20:10:38.139863 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:10:38.140680 systemd-logind[1427]: Session 58 logged out. Waiting for processes to exit. 
Feb 13 20:10:38.141493 systemd-logind[1427]: Removed session 58. Feb 13 20:10:39.135251 kubelet[2437]: E0213 20:10:39.135210 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:40.043452 kubelet[2437]: E0213 20:10:40.043424 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:43.145975 systemd[1]: Started sshd@58-10.0.0.156:22-10.0.0.1:48458.service - OpenSSH per-connection server daemon (10.0.0.1:48458). Feb 13 20:10:43.181262 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 48458 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:43.182496 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:43.186435 systemd-logind[1427]: New session 59 of user core. Feb 13 20:10:43.196514 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:10:43.303638 sshd[3603]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:43.306594 systemd[1]: sshd@58-10.0.0.156:22-10.0.0.1:48458.service: Deactivated successfully. Feb 13 20:10:43.308777 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:10:43.309501 systemd-logind[1427]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:10:43.310614 systemd-logind[1427]: Removed session 59. Feb 13 20:10:44.136026 kubelet[2437]: E0213 20:10:44.135966 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:45.044046 kubelet[2437]: E0213 20:10:45.044020 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:47.043874 kubelet[2437]: E0213 20:10:47.043837 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:47.044709 kubelet[2437]: E0213 20:10:47.044343 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:10:48.313875 systemd[1]: Started sshd@59-10.0.0.156:22-10.0.0.1:48472.service - OpenSSH per-connection server daemon (10.0.0.1:48472). 
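If authenticating is not an option, the other common way out is to stop hitting registry-1.docker.io directly and point containerd (the runtime reporting these failures, containerd[1440]) at a pull-through mirror. A hypothetical fragment using the registry.mirrors syntax that containerd of this vintage accepts; mirror.example.com is a placeholder:

    # /etc/containerd/config.toml (fragment) -- hypothetical mirror;
    # containerd tries endpoints in order and falls back to Docker Hub.
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://mirror.example.com", "https://registry-1.docker.io"]

followed by a systemctl restart containerd on the node.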
Feb 13 20:10:48.349847 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 48472 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:48.351020 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:48.354837 systemd-logind[1427]: New session 60 of user core. Feb 13 20:10:48.372525 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:10:48.483238 sshd[3618]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:48.486116 systemd[1]: sshd@59-10.0.0.156:22-10.0.0.1:48472.service: Deactivated successfully. Feb 13 20:10:48.488992 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:10:48.489561 systemd-logind[1427]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:10:48.490294 systemd-logind[1427]: Removed session 60. Feb 13 20:10:49.137123 kubelet[2437]: E0213 20:10:49.137077 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:53.044049 kubelet[2437]: E0213 20:10:53.043987 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:10:53.499956 systemd[1]: Started sshd@60-10.0.0.156:22-10.0.0.1:33492.service - OpenSSH per-connection server daemon (10.0.0.1:33492). Feb 13 20:10:53.535103 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 33492 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:53.536290 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:53.539525 systemd-logind[1427]: New session 61 of user core. Feb 13 20:10:53.551520 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:10:53.657427 sshd[3633]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:53.660476 systemd[1]: sshd@60-10.0.0.156:22-10.0.0.1:33492.service: Deactivated successfully. Feb 13 20:10:53.662961 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:10:53.663780 systemd-logind[1427]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:10:53.664491 systemd-logind[1427]: Removed session 61. Feb 13 20:10:54.138126 kubelet[2437]: E0213 20:10:54.138098 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:58.667982 systemd[1]: Started sshd@61-10.0.0.156:22-10.0.0.1:33498.service - OpenSSH per-connection server daemon (10.0.0.1:33498). Feb 13 20:10:58.703082 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 33498 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:10:58.704291 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:58.709482 systemd-logind[1427]: New session 62 of user core. Feb 13 20:10:58.717542 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:10:58.826074 sshd[3650]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:58.829470 systemd[1]: sshd@61-10.0.0.156:22-10.0.0.1:33498.service: Deactivated successfully. Feb 13 20:10:58.831153 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:10:58.832501 systemd-logind[1427]: Session 62 logged out. Waiting for processes to exit. 
Feb 13 20:10:58.833457 systemd-logind[1427]: Removed session 62. Feb 13 20:10:59.139752 kubelet[2437]: E0213 20:10:59.139722 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:11:01.044084 kubelet[2437]: E0213 20:11:01.044042 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:11:01.044682 kubelet[2437]: E0213 20:11:01.044628 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8" Feb 13 20:11:03.836830 systemd[1]: Started sshd@62-10.0.0.156:22-10.0.0.1:52302.service - OpenSSH per-connection server daemon (10.0.0.1:52302). Feb 13 20:11:03.872891 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:11:03.874067 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:03.877802 systemd-logind[1427]: New session 63 of user core. Feb 13 20:11:03.883511 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:11:03.990827 sshd[3669]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:03.993223 systemd[1]: sshd@62-10.0.0.156:22-10.0.0.1:52302.service: Deactivated successfully. Feb 13 20:11:03.994687 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:11:03.995888 systemd-logind[1427]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:11:03.996675 systemd-logind[1427]: Removed session 63. Feb 13 20:11:04.141278 kubelet[2437]: E0213 20:11:04.141126 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:11:09.000949 systemd[1]: Started sshd@63-10.0.0.156:22-10.0.0.1:52304.service - OpenSSH per-connection server daemon (10.0.0.1:52304). Feb 13 20:11:09.036470 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 52304 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:11:09.037719 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:09.041446 systemd-logind[1427]: New session 64 of user core. Feb 13 20:11:09.047519 systemd[1]: Started session-64.scope - Session 64 of User core. 
Feb 13 20:11:09.142321 kubelet[2437]: E0213 20:11:09.142283 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:09.153819 sshd[3684]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:09.156894 systemd[1]: sshd@63-10.0.0.156:22-10.0.0.1:52304.service: Deactivated successfully.
Feb 13 20:11:09.158523 systemd[1]: session-64.scope: Deactivated successfully.
Feb 13 20:11:09.159071 systemd-logind[1427]: Session 64 logged out. Waiting for processes to exit.
Feb 13 20:11:09.159773 systemd-logind[1427]: Removed session 64.
Feb 13 20:11:14.142868 kubelet[2437]: E0213 20:11:14.142789 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:14.165929 systemd[1]: Started sshd@64-10.0.0.156:22-10.0.0.1:41704.service - OpenSSH per-connection server daemon (10.0.0.1:41704).
Feb 13 20:11:14.201209 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 41704 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:14.202692 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:14.206019 systemd-logind[1427]: New session 65 of user core.
Feb 13 20:11:14.215618 systemd[1]: Started session-65.scope - Session 65 of User core.
Feb 13 20:11:14.323309 sshd[3699]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:14.326426 systemd[1]: sshd@64-10.0.0.156:22-10.0.0.1:41704.service: Deactivated successfully.
Feb 13 20:11:14.329528 systemd[1]: session-65.scope: Deactivated successfully.
Feb 13 20:11:14.330111 systemd-logind[1427]: Session 65 logged out. Waiting for processes to exit.
Feb 13 20:11:14.331130 systemd-logind[1427]: Removed session 65.
Feb 13 20:11:16.044161 kubelet[2437]: E0213 20:11:16.044123 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:11:16.044920 kubelet[2437]: E0213 20:11:16.044886 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:11:17.044013 kubelet[2437]: E0213 20:11:17.043972 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:11:19.144082 kubelet[2437]: E0213 20:11:19.144002 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:19.333854 systemd[1]: Started sshd@65-10.0.0.156:22-10.0.0.1:41720.service - OpenSSH per-connection server daemon (10.0.0.1:41720).
Feb 13 20:11:19.369087 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 41720 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:19.370246 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:19.373730 systemd-logind[1427]: New session 66 of user core.
Feb 13 20:11:19.383510 systemd[1]: Started session-66.scope - Session 66 of User core.
Feb 13 20:11:19.489564 sshd[3713]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:19.492762 systemd[1]: sshd@65-10.0.0.156:22-10.0.0.1:41720.service: Deactivated successfully.
Feb 13 20:11:19.495204 systemd[1]: session-66.scope: Deactivated successfully.
Feb 13 20:11:19.496218 systemd-logind[1427]: Session 66 logged out. Waiting for processes to exit.
Feb 13 20:11:19.497225 systemd-logind[1427]: Removed session 66.
Feb 13 20:11:24.145049 kubelet[2437]: E0213 20:11:24.145007 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:24.499896 systemd[1]: Started sshd@66-10.0.0.156:22-10.0.0.1:50430.service - OpenSSH per-connection server daemon (10.0.0.1:50430).
Feb 13 20:11:24.534950 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 50430 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:24.536085 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:24.539304 systemd-logind[1427]: New session 67 of user core.
Feb 13 20:11:24.556586 systemd[1]: Started session-67.scope - Session 67 of User core.
Feb 13 20:11:24.663947 sshd[3729]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:24.666956 systemd[1]: sshd@66-10.0.0.156:22-10.0.0.1:50430.service: Deactivated successfully.
Feb 13 20:11:24.668837 systemd[1]: session-67.scope: Deactivated successfully.
Feb 13 20:11:24.669425 systemd-logind[1427]: Session 67 logged out. Waiting for processes to exit.
Feb 13 20:11:24.670156 systemd-logind[1427]: Removed session 67.
Feb 13 20:11:29.146796 kubelet[2437]: E0213 20:11:29.146761 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:29.677966 systemd[1]: Started sshd@67-10.0.0.156:22-10.0.0.1:50440.service - OpenSSH per-connection server daemon (10.0.0.1:50440).
Feb 13 20:11:29.713041 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 50440 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:29.714229 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:29.717805 systemd-logind[1427]: New session 68 of user core.
Feb 13 20:11:29.731555 systemd[1]: Started session-68.scope - Session 68 of User core.
Feb 13 20:11:29.839934 sshd[3744]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:29.842916 systemd[1]: sshd@67-10.0.0.156:22-10.0.0.1:50440.service: Deactivated successfully.
Feb 13 20:11:29.844822 systemd[1]: session-68.scope: Deactivated successfully.
Feb 13 20:11:29.845500 systemd-logind[1427]: Session 68 logged out. Waiting for processes to exit.
Feb 13 20:11:29.846169 systemd-logind[1427]: Removed session 68.
Feb 13 20:11:31.043554 kubelet[2437]: E0213 20:11:31.043517 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:11:31.044110 kubelet[2437]: E0213 20:11:31.044068 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:11:34.148148 kubelet[2437]: E0213 20:11:34.148104 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:34.853906 systemd[1]: Started sshd@68-10.0.0.156:22-10.0.0.1:55064.service - OpenSSH per-connection server daemon (10.0.0.1:55064).
Feb 13 20:11:34.889152 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 55064 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:34.890407 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:34.894017 systemd-logind[1427]: New session 69 of user core.
Feb 13 20:11:34.906560 systemd[1]: Started session-69.scope - Session 69 of User core.
Feb 13 20:11:35.014557 sshd[3761]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:35.018177 systemd[1]: sshd@68-10.0.0.156:22-10.0.0.1:55064.service: Deactivated successfully.
Feb 13 20:11:35.020758 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 20:11:35.021411 systemd-logind[1427]: Session 69 logged out. Waiting for processes to exit.
Feb 13 20:11:35.022588 systemd-logind[1427]: Removed session 69.
Feb 13 20:11:39.148675 kubelet[2437]: E0213 20:11:39.148634 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:40.030890 systemd[1]: Started sshd@69-10.0.0.156:22-10.0.0.1:55076.service - OpenSSH per-connection server daemon (10.0.0.1:55076).
Feb 13 20:11:40.066561 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 55076 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:40.067838 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:40.071567 systemd-logind[1427]: New session 70 of user core.
Feb 13 20:11:40.081513 systemd[1]: Started session-70.scope - Session 70 of User core.
Feb 13 20:11:40.187921 sshd[3776]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:40.190962 systemd[1]: sshd@69-10.0.0.156:22-10.0.0.1:55076.service: Deactivated successfully.
Feb 13 20:11:40.192617 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 20:11:40.193192 systemd-logind[1427]: Session 70 logged out. Waiting for processes to exit.
Feb 13 20:11:40.193974 systemd-logind[1427]: Removed session 70.
Feb 13 20:11:44.149174 kubelet[2437]: E0213 20:11:44.149138 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:45.198774 systemd[1]: Started sshd@70-10.0.0.156:22-10.0.0.1:34916.service - OpenSSH per-connection server daemon (10.0.0.1:34916).
Feb 13 20:11:45.233940 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 34916 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:45.235021 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:45.238362 systemd-logind[1427]: New session 71 of user core.
Feb 13 20:11:45.248593 systemd[1]: Started session-71.scope - Session 71 of User core.
Feb 13 20:11:45.356005 sshd[3791]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:45.358608 systemd[1]: sshd@70-10.0.0.156:22-10.0.0.1:34916.service: Deactivated successfully.
Feb 13 20:11:45.360439 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 20:11:45.361946 systemd-logind[1427]: Session 71 logged out. Waiting for processes to exit.
Feb 13 20:11:45.362909 systemd-logind[1427]: Removed session 71.
Feb 13 20:11:46.043813 kubelet[2437]: E0213 20:11:46.043769 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:11:46.044774 containerd[1440]: time="2025-02-13T20:11:46.044706286Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:11:47.374659 containerd[1440]: time="2025-02-13T20:11:47.374613316Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:11:47.375042 containerd[1440]: time="2025-02-13T20:11:47.374741318Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144"
Feb 13 20:11:47.375084 kubelet[2437]: E0213 20:11:47.374780 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:11:47.375084 kubelet[2437]: E0213 20:11:47.374844 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:11:47.375313 kubelet[2437]: E0213 20:11:47.374928 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-mkbnt_kube-flannel(dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:11:47.376149 kubelet[2437]: E0213 20:11:47.376096 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:11:49.149965 kubelet[2437]: E0213 20:11:49.149896 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:50.365822 systemd[1]: Started sshd@71-10.0.0.156:22-10.0.0.1:34926.service - OpenSSH per-connection server daemon (10.0.0.1:34926).
Feb 13 20:11:50.400931 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 34926 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:50.402139 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:50.405673 systemd-logind[1427]: New session 72 of user core.
Feb 13 20:11:50.416579 systemd[1]: Started session-72.scope - Session 72 of User core.
Feb 13 20:11:50.522988 sshd[3806]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:50.525946 systemd[1]: sshd@71-10.0.0.156:22-10.0.0.1:34926.service: Deactivated successfully.
Feb 13 20:11:50.527549 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 20:11:50.528430 systemd-logind[1427]: Session 72 logged out. Waiting for processes to exit.
Feb 13 20:11:50.529251 systemd-logind[1427]: Removed session 72.
Feb 13 20:11:54.150798 kubelet[2437]: E0213 20:11:54.150735 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:11:55.533695 systemd[1]: Started sshd@72-10.0.0.156:22-10.0.0.1:36920.service - OpenSSH per-connection server daemon (10.0.0.1:36920).
Feb 13 20:11:55.569146 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 36920 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:11:55.570285 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:11:55.574176 systemd-logind[1427]: New session 73 of user core.
Feb 13 20:11:55.584524 systemd[1]: Started session-73.scope - Session 73 of User core.
Feb 13 20:11:55.690162 sshd[3822]: pam_unix(sshd:session): session closed for user core
Feb 13 20:11:55.693863 systemd[1]: sshd@72-10.0.0.156:22-10.0.0.1:36920.service: Deactivated successfully.
Feb 13 20:11:55.695591 systemd[1]: session-73.scope: Deactivated successfully.
Feb 13 20:11:55.696490 systemd-logind[1427]: Session 73 logged out. Waiting for processes to exit.
Feb 13 20:11:55.697899 systemd-logind[1427]: Removed session 73.
Feb 13 20:11:59.151515 kubelet[2437]: E0213 20:11:59.151467 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:00.701052 systemd[1]: Started sshd@73-10.0.0.156:22-10.0.0.1:36936.service - OpenSSH per-connection server daemon (10.0.0.1:36936).
Feb 13 20:12:00.736576 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 36936 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:00.737754 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:00.741627 systemd-logind[1427]: New session 74 of user core.
Feb 13 20:12:00.747581 systemd[1]: Started session-74.scope - Session 74 of User core.
Feb 13 20:12:00.856059 sshd[3837]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:00.859247 systemd[1]: sshd@73-10.0.0.156:22-10.0.0.1:36936.service: Deactivated successfully.
Feb 13 20:12:00.861223 systemd[1]: session-74.scope: Deactivated successfully.
Feb 13 20:12:00.862021 systemd-logind[1427]: Session 74 logged out. Waiting for processes to exit.
Feb 13 20:12:00.862966 systemd-logind[1427]: Removed session 74.
Feb 13 20:12:01.044283 kubelet[2437]: E0213 20:12:01.044194 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:03.043355 kubelet[2437]: E0213 20:12:03.043325 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:03.044219 kubelet[2437]: E0213 20:12:03.043960 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:12:04.152608 kubelet[2437]: E0213 20:12:04.152575 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:05.866892 systemd[1]: Started sshd@74-10.0.0.156:22-10.0.0.1:34762.service - OpenSSH per-connection server daemon (10.0.0.1:34762).
Feb 13 20:12:05.902267 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 34762 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:05.903436 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:05.907241 systemd-logind[1427]: New session 75 of user core.
Feb 13 20:12:05.915613 systemd[1]: Started session-75.scope - Session 75 of User core.
Feb 13 20:12:06.024412 sshd[3855]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:06.027034 systemd[1]: sshd@74-10.0.0.156:22-10.0.0.1:34762.service: Deactivated successfully.
Feb 13 20:12:06.028643 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 20:12:06.030082 systemd-logind[1427]: Session 75 logged out. Waiting for processes to exit.
Feb 13 20:12:06.031033 systemd-logind[1427]: Removed session 75.
Feb 13 20:12:09.154151 kubelet[2437]: E0213 20:12:09.154116 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:11.034911 systemd[1]: Started sshd@75-10.0.0.156:22-10.0.0.1:34768.service - OpenSSH per-connection server daemon (10.0.0.1:34768).
Feb 13 20:12:11.071166 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 34768 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:11.072919 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:11.076779 systemd-logind[1427]: New session 76 of user core.
Feb 13 20:12:11.084606 systemd[1]: Started session-76.scope - Session 76 of User core.
Feb 13 20:12:11.191954 sshd[3869]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:11.195529 systemd[1]: sshd@75-10.0.0.156:22-10.0.0.1:34768.service: Deactivated successfully.
Feb 13 20:12:11.197157 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 20:12:11.197788 systemd-logind[1427]: Session 76 logged out. Waiting for processes to exit.
Feb 13 20:12:11.198576 systemd-logind[1427]: Removed session 76.
Feb 13 20:12:12.043918 kubelet[2437]: E0213 20:12:12.043881 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:14.043647 kubelet[2437]: E0213 20:12:14.043497 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:14.047026 kubelet[2437]: E0213 20:12:14.046192 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:12:14.155608 kubelet[2437]: E0213 20:12:14.155580 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:16.202014 systemd[1]: Started sshd@76-10.0.0.156:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802).
Feb 13 20:12:16.237492 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:16.238702 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:16.242150 systemd-logind[1427]: New session 77 of user core.
Feb 13 20:12:16.251530 systemd[1]: Started session-77.scope - Session 77 of User core.
Feb 13 20:12:16.359834 sshd[3883]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:16.362764 systemd[1]: sshd@76-10.0.0.156:22-10.0.0.1:37802.service: Deactivated successfully.
Feb 13 20:12:16.364339 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 20:12:16.365608 systemd-logind[1427]: Session 77 logged out. Waiting for processes to exit.
Feb 13 20:12:16.366555 systemd-logind[1427]: Removed session 77.
Feb 13 20:12:19.156428 kubelet[2437]: E0213 20:12:19.156339 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:21.370900 systemd[1]: Started sshd@77-10.0.0.156:22-10.0.0.1:37806.service - OpenSSH per-connection server daemon (10.0.0.1:37806).
Feb 13 20:12:21.406768 sshd[3898]: Accepted publickey for core from 10.0.0.1 port 37806 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:21.407961 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:21.411871 systemd-logind[1427]: New session 78 of user core.
Feb 13 20:12:21.425582 systemd[1]: Started session-78.scope - Session 78 of User core.
Feb 13 20:12:21.533981 sshd[3898]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:21.540738 systemd[1]: sshd@77-10.0.0.156:22-10.0.0.1:37806.service: Deactivated successfully.
Feb 13 20:12:21.542058 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 20:12:21.543829 systemd-logind[1427]: Session 78 logged out. Waiting for processes to exit.
Feb 13 20:12:21.553854 systemd[1]: Started sshd@78-10.0.0.156:22-10.0.0.1:37816.service - OpenSSH per-connection server daemon (10.0.0.1:37816).
Feb 13 20:12:21.554819 systemd-logind[1427]: Removed session 78.
Feb 13 20:12:21.584716 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 37816 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:21.585819 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:21.589257 systemd-logind[1427]: New session 79 of user core.
Feb 13 20:12:21.602585 systemd[1]: Started session-79.scope - Session 79 of User core.
Feb 13 20:12:21.847849 sshd[3912]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:21.860954 systemd[1]: sshd@78-10.0.0.156:22-10.0.0.1:37816.service: Deactivated successfully.
Feb 13 20:12:21.862370 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 20:12:21.863730 systemd-logind[1427]: Session 79 logged out. Waiting for processes to exit.
Feb 13 20:12:21.864948 systemd[1]: Started sshd@79-10.0.0.156:22-10.0.0.1:37828.service - OpenSSH per-connection server daemon (10.0.0.1:37828).
Feb 13 20:12:21.865728 systemd-logind[1427]: Removed session 79.
Feb 13 20:12:21.901794 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 37828 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:21.902960 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:21.906846 systemd-logind[1427]: New session 80 of user core.
Feb 13 20:12:21.915578 systemd[1]: Started session-80.scope - Session 80 of User core.
Feb 13 20:12:22.043548 kubelet[2437]: E0213 20:12:22.043467 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:22.482682 sshd[3925]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:22.494765 systemd[1]: sshd@79-10.0.0.156:22-10.0.0.1:37828.service: Deactivated successfully.
Feb 13 20:12:22.497001 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 20:12:22.500600 systemd-logind[1427]: Session 80 logged out. Waiting for processes to exit.
Feb 13 20:12:22.506659 systemd[1]: Started sshd@80-10.0.0.156:22-10.0.0.1:47324.service - OpenSSH per-connection server daemon (10.0.0.1:47324).
Feb 13 20:12:22.507527 systemd-logind[1427]: Removed session 80.
Feb 13 20:12:22.539700 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:22.540834 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:22.544888 systemd-logind[1427]: New session 81 of user core.
Feb 13 20:12:22.550517 systemd[1]: Started session-81.scope - Session 81 of User core.
Feb 13 20:12:22.753336 sshd[3950]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:22.764949 systemd[1]: sshd@80-10.0.0.156:22-10.0.0.1:47324.service: Deactivated successfully.
Feb 13 20:12:22.766453 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 20:12:22.767869 systemd-logind[1427]: Session 81 logged out. Waiting for processes to exit.
Feb 13 20:12:22.774681 systemd[1]: Started sshd@81-10.0.0.156:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340).
Feb 13 20:12:22.775611 systemd-logind[1427]: Removed session 81.
Feb 13 20:12:22.806687 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:22.807967 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:22.812053 systemd-logind[1427]: New session 82 of user core.
Feb 13 20:12:22.821516 systemd[1]: Started session-82.scope - Session 82 of User core.
Feb 13 20:12:22.927247 sshd[3964]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:22.930464 systemd[1]: sshd@81-10.0.0.156:22-10.0.0.1:47340.service: Deactivated successfully.
Feb 13 20:12:22.932571 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 20:12:22.933224 systemd-logind[1427]: Session 82 logged out. Waiting for processes to exit.
Feb 13 20:12:22.933929 systemd-logind[1427]: Removed session 82.
Feb 13 20:12:24.157690 kubelet[2437]: E0213 20:12:24.157639 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:26.043984 kubelet[2437]: E0213 20:12:26.043958 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:26.044657 kubelet[2437]: E0213 20:12:26.044594 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:12:27.044206 kubelet[2437]: E0213 20:12:27.044163 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:27.938854 systemd[1]: Started sshd@82-10.0.0.156:22-10.0.0.1:47352.service - OpenSSH per-connection server daemon (10.0.0.1:47352).
Feb 13 20:12:27.973938 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:27.975117 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:27.978609 systemd-logind[1427]: New session 83 of user core.
Feb 13 20:12:27.994508 systemd[1]: Started session-83.scope - Session 83 of User core.
Feb 13 20:12:28.102569 sshd[3978]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:28.105605 systemd[1]: sshd@82-10.0.0.156:22-10.0.0.1:47352.service: Deactivated successfully.
Feb 13 20:12:28.107910 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 20:12:28.108718 systemd-logind[1427]: Session 83 logged out. Waiting for processes to exit.
Feb 13 20:12:28.109459 systemd-logind[1427]: Removed session 83.
Feb 13 20:12:29.158352 kubelet[2437]: E0213 20:12:29.158311 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:33.112836 systemd[1]: Started sshd@83-10.0.0.156:22-10.0.0.1:59228.service - OpenSSH per-connection server daemon (10.0.0.1:59228).
Feb 13 20:12:33.148023 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 59228 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:33.149197 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:33.152958 systemd-logind[1427]: New session 84 of user core.
Feb 13 20:12:33.161563 systemd[1]: Started session-84.scope - Session 84 of User core.
Feb 13 20:12:33.267478 sshd[3994]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:33.270522 systemd[1]: sshd@83-10.0.0.156:22-10.0.0.1:59228.service: Deactivated successfully.
Feb 13 20:12:33.274964 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 20:12:33.275790 systemd-logind[1427]: Session 84 logged out. Waiting for processes to exit.
Feb 13 20:12:33.277673 systemd-logind[1427]: Removed session 84.
Feb 13 20:12:34.159299 kubelet[2437]: E0213 20:12:34.159244 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:38.277743 systemd[1]: Started sshd@84-10.0.0.156:22-10.0.0.1:59240.service - OpenSSH per-connection server daemon (10.0.0.1:59240).
Feb 13 20:12:38.313017 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 59240 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:38.314172 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:38.317999 systemd-logind[1427]: New session 85 of user core.
Feb 13 20:12:38.332522 systemd[1]: Started session-85.scope - Session 85 of User core.
Feb 13 20:12:38.436347 sshd[4009]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:38.438948 systemd-logind[1427]: Session 85 logged out. Waiting for processes to exit.
Feb 13 20:12:38.439115 systemd[1]: sshd@84-10.0.0.156:22-10.0.0.1:59240.service: Deactivated successfully.
Feb 13 20:12:38.440843 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 20:12:38.442582 systemd-logind[1427]: Removed session 85.
Feb 13 20:12:39.044624 kubelet[2437]: E0213 20:12:39.044546 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:39.045296 kubelet[2437]: E0213 20:12:39.045242 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:12:39.160294 kubelet[2437]: E0213 20:12:39.160258 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:43.449018 systemd[1]: Started sshd@85-10.0.0.156:22-10.0.0.1:51006.service - OpenSSH per-connection server daemon (10.0.0.1:51006).
Feb 13 20:12:43.498765 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 51006 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:43.499982 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:43.503685 systemd-logind[1427]: New session 86 of user core.
Feb 13 20:12:43.523520 systemd[1]: Started session-86.scope - Session 86 of User core.
Feb 13 20:12:43.628766 sshd[4023]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:43.631759 systemd[1]: sshd@85-10.0.0.156:22-10.0.0.1:51006.service: Deactivated successfully.
Feb 13 20:12:43.634213 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 20:12:43.635164 systemd-logind[1427]: Session 86 logged out. Waiting for processes to exit.
Feb 13 20:12:43.636647 systemd-logind[1427]: Removed session 86.
Feb 13 20:12:44.161225 kubelet[2437]: E0213 20:12:44.161186 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:48.639200 systemd[1]: Started sshd@86-10.0.0.156:22-10.0.0.1:51014.service - OpenSSH per-connection server daemon (10.0.0.1:51014).
Feb 13 20:12:48.675317 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 51014 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:48.676529 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:48.680094 systemd-logind[1427]: New session 87 of user core.
Feb 13 20:12:48.689505 systemd[1]: Started session-87.scope - Session 87 of User core.
Feb 13 20:12:48.792902 sshd[4037]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:48.795240 systemd[1]: sshd@86-10.0.0.156:22-10.0.0.1:51014.service: Deactivated successfully.
Feb 13 20:12:48.796869 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 20:12:48.798305 systemd-logind[1427]: Session 87 logged out. Waiting for processes to exit.
Feb 13 20:12:48.800002 systemd-logind[1427]: Removed session 87.
Feb 13 20:12:49.162819 kubelet[2437]: E0213 20:12:49.162777 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:50.043811 kubelet[2437]: E0213 20:12:50.043770 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:12:50.044393 kubelet[2437]: E0213 20:12:50.044342 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:12:53.803531 systemd[1]: Started sshd@87-10.0.0.156:22-10.0.0.1:48838.service - OpenSSH per-connection server daemon (10.0.0.1:48838).
Feb 13 20:12:53.839310 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 48838 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:53.840469 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:53.843876 systemd-logind[1427]: New session 88 of user core.
Feb 13 20:12:53.858518 systemd[1]: Started session-88.scope - Session 88 of User core.
Feb 13 20:12:53.964279 sshd[4051]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:53.967316 systemd[1]: sshd@87-10.0.0.156:22-10.0.0.1:48838.service: Deactivated successfully.
Feb 13 20:12:53.969655 systemd[1]: session-88.scope: Deactivated successfully.
Feb 13 20:12:53.970611 systemd-logind[1427]: Session 88 logged out. Waiting for processes to exit.
Feb 13 20:12:53.971357 systemd-logind[1427]: Removed session 88.
Feb 13 20:12:54.163407 kubelet[2437]: E0213 20:12:54.163338 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:12:58.974803 systemd[1]: Started sshd@88-10.0.0.156:22-10.0.0.1:48844.service - OpenSSH per-connection server daemon (10.0.0.1:48844).
Feb 13 20:12:59.010300 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 48844 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:12:59.011468 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:12:59.014833 systemd-logind[1427]: New session 89 of user core.
Feb 13 20:12:59.023523 systemd[1]: Started session-89.scope - Session 89 of User core.
Feb 13 20:12:59.127241 sshd[4068]: pam_unix(sshd:session): session closed for user core
Feb 13 20:12:59.130181 systemd[1]: sshd@88-10.0.0.156:22-10.0.0.1:48844.service: Deactivated successfully.
Feb 13 20:12:59.131739 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 20:12:59.132300 systemd-logind[1427]: Session 89 logged out. Waiting for processes to exit.
Feb 13 20:12:59.133055 systemd-logind[1427]: Removed session 89.
Feb 13 20:12:59.164506 kubelet[2437]: E0213 20:12:59.164464 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:01.119869 update_engine[1431]: I20250213 20:13:01.119798 1431 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 20:13:01.119869 update_engine[1431]: I20250213 20:13:01.119865 1431 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 20:13:01.120233 update_engine[1431]: I20250213 20:13:01.120146 1431 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 20:13:01.120554 update_engine[1431]: I20250213 20:13:01.120520 1431 omaha_request_params.cc:62] Current group set to lts
Feb 13 20:13:01.120645 update_engine[1431]: I20250213 20:13:01.120621 1431 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 20:13:01.120645 update_engine[1431]: I20250213 20:13:01.120635 1431 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 20:13:01.120694 update_engine[1431]: I20250213 20:13:01.120651 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:13:01.120694 update_engine[1431]: I20250213 20:13:01.120677 1431 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 20:13:01.120738 update_engine[1431]: I20250213 20:13:01.120724 1431 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:13:01.120738 update_engine[1431]: I20250213 20:13:01.120733 1431 omaha_request_action.cc:272] Request:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]:
Feb 13 20:13:01.120738 update_engine[1431]: I20250213 20:13:01.120738 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:13:01.120990 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 20:13:01.121764 update_engine[1431]: I20250213 20:13:01.121723 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:13:01.121972 update_engine[1431]: I20250213 20:13:01.121945 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:13:01.127827 update_engine[1431]: E20250213 20:13:01.127787 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:13:01.127885 update_engine[1431]: I20250213 20:13:01.127852 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 20:13:04.043489 kubelet[2437]: E0213 20:13:04.043451 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:04.044125 kubelet[2437]: E0213 20:13:04.044024 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:13:04.138792 systemd[1]: Started sshd@89-10.0.0.156:22-10.0.0.1:37108.service - OpenSSH per-connection server daemon (10.0.0.1:37108).
Feb 13 20:13:04.165802 kubelet[2437]: E0213 20:13:04.165776 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:04.174138 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 37108 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:04.175322 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:04.178728 systemd-logind[1427]: New session 90 of user core.
Feb 13 20:13:04.186510 systemd[1]: Started session-90.scope - Session 90 of User core.
Feb 13 20:13:04.289323 sshd[4085]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:04.292493 systemd[1]: sshd@89-10.0.0.156:22-10.0.0.1:37108.service: Deactivated successfully.
Feb 13 20:13:04.294664 systemd[1]: session-90.scope: Deactivated successfully.
Feb 13 20:13:04.296151 systemd-logind[1427]: Session 90 logged out. Waiting for processes to exit.
Feb 13 20:13:04.296881 systemd-logind[1427]: Removed session 90.
Feb 13 20:13:07.043803 kubelet[2437]: E0213 20:13:07.043699 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:09.166741 kubelet[2437]: E0213 20:13:09.166706 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:09.303834 systemd[1]: Started sshd@90-10.0.0.156:22-10.0.0.1:37122.service - OpenSSH per-connection server daemon (10.0.0.1:37122).
Feb 13 20:13:09.339270 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 37122 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:09.340437 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:09.343707 systemd-logind[1427]: New session 91 of user core.
Feb 13 20:13:09.357510 systemd[1]: Started session-91.scope - Session 91 of User core.
Feb 13 20:13:09.460364 sshd[4099]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:09.463424 systemd[1]: sshd@90-10.0.0.156:22-10.0.0.1:37122.service: Deactivated successfully.
Feb 13 20:13:09.465021 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 20:13:09.466248 systemd-logind[1427]: Session 91 logged out. Waiting for processes to exit.
Feb 13 20:13:09.467016 systemd-logind[1427]: Removed session 91.
Feb 13 20:13:11.119906 update_engine[1431]: I20250213 20:13:11.119372 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:13:11.119906 update_engine[1431]: I20250213 20:13:11.119714 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:13:11.119906 update_engine[1431]: I20250213 20:13:11.119864 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:13:11.141762 update_engine[1431]: E20250213 20:13:11.141644 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:13:11.141762 update_engine[1431]: I20250213 20:13:11.141731 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 20:13:14.167981 kubelet[2437]: E0213 20:13:14.167941 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:14.470797 systemd[1]: Started sshd@91-10.0.0.156:22-10.0.0.1:54210.service - OpenSSH per-connection server daemon (10.0.0.1:54210).
Feb 13 20:13:14.508316 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:14.509767 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:14.515703 systemd-logind[1427]: New session 92 of user core.
Feb 13 20:13:14.525516 systemd[1]: Started session-92.scope - Session 92 of User core.
Feb 13 20:13:14.631923 sshd[4113]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:14.635085 systemd[1]: sshd@91-10.0.0.156:22-10.0.0.1:54210.service: Deactivated successfully.
Feb 13 20:13:14.636833 systemd[1]: session-92.scope: Deactivated successfully.
Feb 13 20:13:14.637508 systemd-logind[1427]: Session 92 logged out. Waiting for processes to exit.
Feb 13 20:13:14.638278 systemd-logind[1427]: Removed session 92.
Feb 13 20:13:17.043813 kubelet[2437]: E0213 20:13:17.043781 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:17.043813 kubelet[2437]: E0213 20:13:17.043813 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:17.044620 kubelet[2437]: E0213 20:13:17.044329 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:13:19.169229 kubelet[2437]: E0213 20:13:19.169183 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:19.642833 systemd[1]: Started sshd@92-10.0.0.156:22-10.0.0.1:54216.service - OpenSSH per-connection server daemon (10.0.0.1:54216).
Feb 13 20:13:19.678230 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 54216 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:19.679493 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:19.683204 systemd-logind[1427]: New session 93 of user core.
Feb 13 20:13:19.691515 systemd[1]: Started session-93.scope - Session 93 of User core.
Feb 13 20:13:19.798126 sshd[4129]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:19.802265 systemd[1]: sshd@92-10.0.0.156:22-10.0.0.1:54216.service: Deactivated successfully.
Feb 13 20:13:19.804492 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 20:13:19.805243 systemd-logind[1427]: Session 93 logged out. Waiting for processes to exit.
Feb 13 20:13:19.806307 systemd-logind[1427]: Removed session 93.
Feb 13 20:13:21.119166 update_engine[1431]: I20250213 20:13:21.119061 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:13:21.119674 update_engine[1431]: I20250213 20:13:21.119448 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:13:21.119674 update_engine[1431]: I20250213 20:13:21.119619 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:13:21.132325 update_engine[1431]: E20250213 20:13:21.132274 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:13:21.132420 update_engine[1431]: I20250213 20:13:21.132329 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 20:13:24.043872 kubelet[2437]: E0213 20:13:24.043844 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:24.170374 kubelet[2437]: E0213 20:13:24.170292 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:24.811931 systemd[1]: Started sshd@93-10.0.0.156:22-10.0.0.1:57732.service - OpenSSH per-connection server daemon (10.0.0.1:57732).
Feb 13 20:13:24.847176 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 57732 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:24.848310 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:24.852066 systemd-logind[1427]: New session 94 of user core.
Feb 13 20:13:24.861510 systemd[1]: Started session-94.scope - Session 94 of User core.
Feb 13 20:13:24.965764 sshd[4144]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:24.968808 systemd[1]: sshd@93-10.0.0.156:22-10.0.0.1:57732.service: Deactivated successfully.
Feb 13 20:13:24.971310 systemd[1]: session-94.scope: Deactivated successfully.
Feb 13 20:13:24.972891 systemd-logind[1427]: Session 94 logged out. Waiting for processes to exit.
Feb 13 20:13:24.973867 systemd-logind[1427]: Removed session 94.
Feb 13 20:13:29.171341 kubelet[2437]: E0213 20:13:29.171300 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:29.976934 systemd[1]: Started sshd@94-10.0.0.156:22-10.0.0.1:57738.service - OpenSSH per-connection server daemon (10.0.0.1:57738).
Feb 13 20:13:30.012627 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 57738 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:30.013875 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:30.017948 systemd-logind[1427]: New session 95 of user core.
Feb 13 20:13:30.030522 systemd[1]: Started session-95.scope - Session 95 of User core.
Feb 13 20:13:30.134487 sshd[4161]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:30.137641 systemd[1]: sshd@94-10.0.0.156:22-10.0.0.1:57738.service: Deactivated successfully.
Feb 13 20:13:30.139242 systemd[1]: session-95.scope: Deactivated successfully.
Feb 13 20:13:30.139912 systemd-logind[1427]: Session 95 logged out. Waiting for processes to exit.
Feb 13 20:13:30.140676 systemd-logind[1427]: Removed session 95.
Feb 13 20:13:31.043764 kubelet[2437]: E0213 20:13:31.043591 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:31.044138 kubelet[2437]: E0213 20:13:31.044098 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:13:31.118518 update_engine[1431]: I20250213 20:13:31.118435 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:13:31.118884 update_engine[1431]: I20250213 20:13:31.118731 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:13:31.118912 update_engine[1431]: I20250213 20:13:31.118878 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:13:31.126133 update_engine[1431]: E20250213 20:13:31.126082 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:13:31.126190 update_engine[1431]: I20250213 20:13:31.126159 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:13:31.126190 update_engine[1431]: I20250213 20:13:31.126173 1431 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:13:31.126298 update_engine[1431]: E20250213 20:13:31.126269 1431 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 20:13:31.126298 update_engine[1431]: I20250213 20:13:31.126292 1431 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 20:13:31.126298 update_engine[1431]: I20250213 20:13:31.126298 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:13:31.126369 update_engine[1431]: I20250213 20:13:31.126303 1431 update_attempter.cc:306] Processing Done.
Feb 13 20:13:31.126369 update_engine[1431]: E20250213 20:13:31.126315 1431 update_attempter.cc:619] Update failed.
Feb 13 20:13:31.126369 update_engine[1431]: I20250213 20:13:31.126320 1431 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:13:31.126369 update_engine[1431]: I20250213 20:13:31.126324 1431 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:13:31.126369 update_engine[1431]: I20250213 20:13:31.126329 1431 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:13:31.126499 update_engine[1431]: I20250213 20:13:31.126410 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:13:31.126499 update_engine[1431]: I20250213 20:13:31.126431 1431 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:13:31.126499 update_engine[1431]: I20250213 20:13:31.126438 1431 omaha_request_action.cc:272] Request:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]:
Feb 13 20:13:31.126499 update_engine[1431]: I20250213 20:13:31.126443 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:13:31.126683 update_engine[1431]: I20250213 20:13:31.126576 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:13:31.126708 update_engine[1431]: I20250213 20:13:31.126695 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:13:31.127007 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:13:31.139765 update_engine[1431]: E20250213 20:13:31.139725 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139778 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139786 1431 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139791 1431 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139796 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139801 1431 update_attempter.cc:306] Processing Done.
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139806 1431 update_attempter.cc:310] Error event sent.
Feb 13 20:13:31.139813 update_engine[1431]: I20250213 20:13:31.139813 1431 update_check_scheduler.cc:74] Next update check in 47m20s
Feb 13 20:13:31.140065 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 20:13:34.172834 kubelet[2437]: E0213 20:13:34.172765 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:35.144970 systemd[1]: Started sshd@95-10.0.0.156:22-10.0.0.1:47544.service - OpenSSH per-connection server daemon (10.0.0.1:47544).
Feb 13 20:13:35.180271 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 47544 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:35.181481 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:35.184795 systemd-logind[1427]: New session 96 of user core.
Feb 13 20:13:35.197515 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 20:13:35.300942 sshd[4178]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:35.303908 systemd[1]: sshd@95-10.0.0.156:22-10.0.0.1:47544.service: Deactivated successfully.
Feb 13 20:13:35.305465 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 20:13:35.305976 systemd-logind[1427]: Session 96 logged out. Waiting for processes to exit.
Feb 13 20:13:35.306740 systemd-logind[1427]: Removed session 96.
Feb 13 20:13:39.174314 kubelet[2437]: E0213 20:13:39.174264 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:40.311791 systemd[1]: Started sshd@96-10.0.0.156:22-10.0.0.1:47552.service - OpenSSH per-connection server daemon (10.0.0.1:47552).
Feb 13 20:13:40.347257 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 47552 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:40.348560 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:40.352126 systemd-logind[1427]: New session 97 of user core.
Feb 13 20:13:40.358524 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:13:40.461005 sshd[4192]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:40.464169 systemd[1]: sshd@96-10.0.0.156:22-10.0.0.1:47552.service: Deactivated successfully.
Feb 13 20:13:40.465718 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:13:40.466299 systemd-logind[1427]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:13:40.468325 systemd-logind[1427]: Removed session 97.
Feb 13 20:13:42.043817 kubelet[2437]: E0213 20:13:42.043788 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:42.044641 kubelet[2437]: E0213 20:13:42.044402 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:13:44.175363 kubelet[2437]: E0213 20:13:44.175315 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:45.474739 systemd[1]: Started sshd@97-10.0.0.156:22-10.0.0.1:48066.service - OpenSSH per-connection server daemon (10.0.0.1:48066).
Feb 13 20:13:45.510616 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 48066 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:45.511822 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:45.515451 systemd-logind[1427]: New session 98 of user core.
Feb 13 20:13:45.524514 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 20:13:45.629990 sshd[4206]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:45.635110 systemd[1]: sshd@97-10.0.0.156:22-10.0.0.1:48066.service: Deactivated successfully.
Feb 13 20:13:45.636613 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 20:13:45.637832 systemd-logind[1427]: Session 98 logged out. Waiting for processes to exit.
Feb 13 20:13:45.638597 systemd-logind[1427]: Removed session 98.
Feb 13 20:13:49.176700 kubelet[2437]: E0213 20:13:49.176634 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:50.045639 kubelet[2437]: E0213 20:13:50.045595 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:50.641247 systemd[1]: Started sshd@98-10.0.0.156:22-10.0.0.1:48078.service - OpenSSH per-connection server daemon (10.0.0.1:48078).
Feb 13 20:13:50.676480 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 48078 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:50.677604 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:50.681299 systemd-logind[1427]: New session 99 of user core.
Feb 13 20:13:50.695584 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 20:13:50.799300 sshd[4221]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:50.801691 systemd[1]: sshd@98-10.0.0.156:22-10.0.0.1:48078.service: Deactivated successfully.
Feb 13 20:13:50.803746 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 20:13:50.806663 systemd-logind[1427]: Session 99 logged out. Waiting for processes to exit.
Feb 13 20:13:50.808894 systemd-logind[1427]: Removed session 99.
Feb 13 20:13:54.177851 kubelet[2437]: E0213 20:13:54.177797 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:13:55.810721 systemd[1]: Started sshd@99-10.0.0.156:22-10.0.0.1:35812.service - OpenSSH per-connection server daemon (10.0.0.1:35812).
Feb 13 20:13:55.846389 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 35812 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:13:55.847619 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:13:55.851117 systemd-logind[1427]: New session 100 of user core.
Feb 13 20:13:55.856513 systemd[1]: Started session-100.scope - Session 100 of User core.
Feb 13 20:13:55.956948 sshd[4238]: pam_unix(sshd:session): session closed for user core
Feb 13 20:13:55.959921 systemd[1]: sshd@99-10.0.0.156:22-10.0.0.1:35812.service: Deactivated successfully.
Feb 13 20:13:55.962151 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 20:13:55.962825 systemd-logind[1427]: Session 100 logged out. Waiting for processes to exit.
Feb 13 20:13:55.963512 systemd-logind[1427]: Removed session 100.
Feb 13 20:13:57.043688 kubelet[2437]: E0213 20:13:57.043657 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:13:57.044335 kubelet[2437]: E0213 20:13:57.044268 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:13:59.179452 kubelet[2437]: E0213 20:13:59.179418 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:00.967865 systemd[1]: Started sshd@100-10.0.0.156:22-10.0.0.1:35826.service - OpenSSH per-connection server daemon (10.0.0.1:35826).
Feb 13 20:14:01.003415 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 35826 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:01.004568 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:01.008219 systemd-logind[1427]: New session 101 of user core.
Feb 13 20:14:01.025571 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 20:14:01.128638 sshd[4253]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:01.131671 systemd[1]: sshd@100-10.0.0.156:22-10.0.0.1:35826.service: Deactivated successfully.
Feb 13 20:14:01.133795 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 20:14:01.134337 systemd-logind[1427]: Session 101 logged out. Waiting for processes to exit.
Feb 13 20:14:01.135102 systemd-logind[1427]: Removed session 101.
Feb 13 20:14:04.180578 kubelet[2437]: E0213 20:14:04.180538 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:06.139882 systemd[1]: Started sshd@101-10.0.0.156:22-10.0.0.1:49006.service - OpenSSH per-connection server daemon (10.0.0.1:49006).
Feb 13 20:14:06.175307 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 49006 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:06.176513 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:06.179919 systemd-logind[1427]: New session 102 of user core.
Feb 13 20:14:06.187531 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 20:14:06.288992 sshd[4269]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:06.292167 systemd[1]: sshd@101-10.0.0.156:22-10.0.0.1:49006.service: Deactivated successfully.
Feb 13 20:14:06.293826 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 20:14:06.294417 systemd-logind[1427]: Session 102 logged out. Waiting for processes to exit.
Feb 13 20:14:06.295364 systemd-logind[1427]: Removed session 102.
Feb 13 20:14:08.043592 kubelet[2437]: E0213 20:14:08.043552 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:08.044509 kubelet[2437]: E0213 20:14:08.044443 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:14:09.181161 kubelet[2437]: E0213 20:14:09.181123 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:11.299923 systemd[1]: Started sshd@102-10.0.0.156:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014).
Feb 13 20:14:11.336603 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:11.337776 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:11.341420 systemd-logind[1427]: New session 103 of user core.
Feb 13 20:14:11.348595 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 20:14:11.450042 sshd[4283]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:11.453038 systemd[1]: sshd@102-10.0.0.156:22-10.0.0.1:49014.service: Deactivated successfully.
Feb 13 20:14:11.454641 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 20:14:11.456137 systemd-logind[1427]: Session 103 logged out. Waiting for processes to exit.
Feb 13 20:14:11.457452 systemd-logind[1427]: Removed session 103.
Feb 13 20:14:14.181983 kubelet[2437]: E0213 20:14:14.181947 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:16.461176 systemd[1]: Started sshd@103-10.0.0.156:22-10.0.0.1:38548.service - OpenSSH per-connection server daemon (10.0.0.1:38548).
Feb 13 20:14:16.496412 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 38548 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:16.497589 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:16.501447 systemd-logind[1427]: New session 104 of user core.
Feb 13 20:14:16.512577 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 20:14:16.617466 sshd[4298]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:16.620365 systemd[1]: sshd@103-10.0.0.156:22-10.0.0.1:38548.service: Deactivated successfully.
Feb 13 20:14:16.622286 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 20:14:16.624879 systemd-logind[1427]: Session 104 logged out. Waiting for processes to exit.
Feb 13 20:14:16.625774 systemd-logind[1427]: Removed session 104.
Feb 13 20:14:19.183622 kubelet[2437]: E0213 20:14:19.183534 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:21.629754 systemd[1]: Started sshd@104-10.0.0.156:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562).
Feb 13 20:14:21.665048 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:21.666190 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:21.670066 systemd-logind[1427]: New session 105 of user core.
Feb 13 20:14:21.680523 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 20:14:21.783848 sshd[4313]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:21.786794 systemd[1]: sshd@104-10.0.0.156:22-10.0.0.1:38562.service: Deactivated successfully.
Feb 13 20:14:21.789218 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 20:14:21.790237 systemd-logind[1427]: Session 105 logged out. Waiting for processes to exit.
Feb 13 20:14:21.791097 systemd-logind[1427]: Removed session 105.
Feb 13 20:14:22.043576 kubelet[2437]: E0213 20:14:22.043470 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:22.044655 kubelet[2437]: E0213 20:14:22.044027 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:14:24.184783 kubelet[2437]: E0213 20:14:24.184743 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:26.794091 systemd[1]: Started sshd@105-10.0.0.156:22-10.0.0.1:47044.service - OpenSSH per-connection server daemon (10.0.0.1:47044).
Feb 13 20:14:26.830623 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 47044 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:26.831892 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:26.836192 systemd-logind[1427]: New session 106 of user core.
Feb 13 20:14:26.845521 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:14:26.948251 sshd[4327]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:26.951424 systemd[1]: sshd@105-10.0.0.156:22-10.0.0.1:47044.service: Deactivated successfully.
Feb 13 20:14:26.953672 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:14:26.954444 systemd-logind[1427]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:14:26.955325 systemd-logind[1427]: Removed session 106.
Feb 13 20:14:29.186215 kubelet[2437]: E0213 20:14:29.186176 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:30.044091 kubelet[2437]: E0213 20:14:30.043745 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:31.958866 systemd[1]: Started sshd@106-10.0.0.156:22-10.0.0.1:47048.service - OpenSSH per-connection server daemon (10.0.0.1:47048).
Feb 13 20:14:31.994068 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 47048 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:31.995250 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:31.999003 systemd-logind[1427]: New session 107 of user core.
Feb 13 20:14:32.006605 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:14:32.110430 sshd[4341]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:32.113626 systemd[1]: sshd@106-10.0.0.156:22-10.0.0.1:47048.service: Deactivated successfully.
Feb 13 20:14:32.115142 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:14:32.116857 systemd-logind[1427]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:14:32.117664 systemd-logind[1427]: Removed session 107.
Feb 13 20:14:33.043945 kubelet[2437]: E0213 20:14:33.043851 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:34.043647 kubelet[2437]: E0213 20:14:34.043434 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:34.044001 kubelet[2437]: E0213 20:14:34.043935 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:14:34.187578 kubelet[2437]: E0213 20:14:34.187537 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:37.121071 systemd[1]: Started sshd@107-10.0.0.156:22-10.0.0.1:46426.service - OpenSSH per-connection server daemon (10.0.0.1:46426).
Feb 13 20:14:37.157140 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 46426 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:37.158275 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:37.162035 systemd-logind[1427]: New session 108 of user core.
Feb 13 20:14:37.168610 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:14:37.271540 sshd[4359]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:37.274508 systemd[1]: sshd@107-10.0.0.156:22-10.0.0.1:46426.service: Deactivated successfully.
Feb 13 20:14:37.276020 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:14:37.278706 systemd-logind[1427]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:14:37.279594 systemd-logind[1427]: Removed session 108.
Feb 13 20:14:38.044223 kubelet[2437]: E0213 20:14:38.044146 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:39.188214 kubelet[2437]: E0213 20:14:39.188174 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:42.281974 systemd[1]: Started sshd@108-10.0.0.156:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432).
Feb 13 20:14:42.317442 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:42.318595 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:42.322262 systemd-logind[1427]: New session 109 of user core.
Feb 13 20:14:42.341538 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:14:42.446607 sshd[4373]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:42.449586 systemd[1]: sshd@108-10.0.0.156:22-10.0.0.1:46432.service: Deactivated successfully.
Feb 13 20:14:42.452045 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:14:42.452969 systemd-logind[1427]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:14:42.453841 systemd-logind[1427]: Removed session 109.
Feb 13 20:14:44.189667 kubelet[2437]: E0213 20:14:44.189601 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:46.043763 kubelet[2437]: E0213 20:14:46.043730 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:46.044462 kubelet[2437]: E0213 20:14:46.044202 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:14:47.459884 systemd[1]: Started sshd@109-10.0.0.156:22-10.0.0.1:44458.service - OpenSSH per-connection server daemon (10.0.0.1:44458).
Feb 13 20:14:47.495467 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 44458 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:47.496659 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:47.500321 systemd-logind[1427]: New session 110 of user core.
Feb 13 20:14:47.515572 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:14:47.619123 sshd[4388]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:47.622800 systemd[1]: sshd@109-10.0.0.156:22-10.0.0.1:44458.service: Deactivated successfully.
Feb 13 20:14:47.625040 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:14:47.625951 systemd-logind[1427]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:14:47.626780 systemd-logind[1427]: Removed session 110.
Feb 13 20:14:49.190596 kubelet[2437]: E0213 20:14:49.190542 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:52.629621 systemd[1]: Started sshd@110-10.0.0.156:22-10.0.0.1:58896.service - OpenSSH per-connection server daemon (10.0.0.1:58896).
Feb 13 20:14:52.670547 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 58896 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:52.671757 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:52.675260 systemd-logind[1427]: New session 111 of user core.
Feb 13 20:14:52.694511 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:14:52.798234 sshd[4404]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:52.802157 systemd[1]: sshd@110-10.0.0.156:22-10.0.0.1:58896.service: Deactivated successfully.
Feb 13 20:14:52.804050 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:14:52.805122 systemd-logind[1427]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:14:52.806262 systemd-logind[1427]: Removed session 111.
Feb 13 20:14:53.044041 kubelet[2437]: E0213 20:14:53.043924 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:54.191431 kubelet[2437]: E0213 20:14:54.191353 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:14:57.810141 systemd[1]: Started sshd@111-10.0.0.156:22-10.0.0.1:58908.service - OpenSSH per-connection server daemon (10.0.0.1:58908).
Feb 13 20:14:57.845807 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 58908 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:14:57.846931 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:57.850435 systemd-logind[1427]: New session 112 of user core.
Feb 13 20:14:57.860533 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:14:57.962984 sshd[4421]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:57.966181 systemd[1]: sshd@111-10.0.0.156:22-10.0.0.1:58908.service: Deactivated successfully.
Feb 13 20:14:57.967721 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:14:57.968419 systemd-logind[1427]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:14:57.969497 systemd-logind[1427]: Removed session 112.
Feb 13 20:14:58.045077 kubelet[2437]: E0213 20:14:58.044859 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:14:58.045541 kubelet[2437]: E0213 20:14:58.045452 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:14:59.192396 kubelet[2437]: E0213 20:14:59.192346 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:02.973876 systemd[1]: Started sshd@112-10.0.0.156:22-10.0.0.1:55760.service - OpenSSH per-connection server daemon (10.0.0.1:55760).
Feb 13 20:15:03.009334 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 55760 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:15:03.010487 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:03.014168 systemd-logind[1427]: New session 113 of user core.
Feb 13 20:15:03.019602 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:15:03.122356 sshd[4438]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:03.125369 systemd[1]: sshd@112-10.0.0.156:22-10.0.0.1:55760.service: Deactivated successfully.
Feb 13 20:15:03.127984 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:15:03.128784 systemd-logind[1427]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:15:03.129634 systemd-logind[1427]: Removed session 113.
Feb 13 20:15:04.193695 kubelet[2437]: E0213 20:15:04.193656 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:08.132776 systemd[1]: Started sshd@113-10.0.0.156:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774).
Feb 13 20:15:08.168284 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:15:08.169493 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:08.172794 systemd-logind[1427]: New session 114 of user core.
Feb 13 20:15:08.186585 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:15:08.288786 sshd[4453]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:08.291810 systemd[1]: sshd@113-10.0.0.156:22-10.0.0.1:55774.service: Deactivated successfully.
Feb 13 20:15:08.293368 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:15:08.293914 systemd-logind[1427]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:15:08.295085 systemd-logind[1427]: Removed session 114.
Feb 13 20:15:09.194753 kubelet[2437]: E0213 20:15:09.194713 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:11.043683 kubelet[2437]: E0213 20:15:11.043658 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:15:11.044333 kubelet[2437]: E0213 20:15:11.044131 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"
Feb 13 20:15:13.300028 systemd[1]: Started sshd@114-10.0.0.156:22-10.0.0.1:35846.service - OpenSSH per-connection server daemon (10.0.0.1:35846).
Feb 13 20:15:13.335424 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 35846 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:15:13.336564 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:13.339846 systemd-logind[1427]: New session 115 of user core.
Feb 13 20:15:13.350519 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:15:13.454334 sshd[4467]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:13.456746 systemd[1]: sshd@114-10.0.0.156:22-10.0.0.1:35846.service: Deactivated successfully.
Feb 13 20:15:13.458183 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:15:13.459446 systemd-logind[1427]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:15:13.460600 systemd-logind[1427]: Removed session 115.
Feb 13 20:15:14.195367 kubelet[2437]: E0213 20:15:14.195296 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:18.464978 systemd[1]: Started sshd@115-10.0.0.156:22-10.0.0.1:35852.service - OpenSSH per-connection server daemon (10.0.0.1:35852).
Feb 13 20:15:18.500515 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 35852 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:15:18.501659 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:18.504932 systemd-logind[1427]: New session 116 of user core.
Feb 13 20:15:18.518526 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:15:18.623214 sshd[4481]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:18.626320 systemd[1]: sshd@115-10.0.0.156:22-10.0.0.1:35852.service: Deactivated successfully.
Feb 13 20:15:18.628531 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:15:18.629093 systemd-logind[1427]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:15:18.630156 systemd-logind[1427]: Removed session 116.
Feb 13 20:15:19.197041 kubelet[2437]: E0213 20:15:19.196901 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:23.633754 systemd[1]: Started sshd@116-10.0.0.156:22-10.0.0.1:39610.service - OpenSSH per-connection server daemon (10.0.0.1:39610).
Feb 13 20:15:23.669222 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 39610 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:15:23.670407 sshd[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:23.673978 systemd-logind[1427]: New session 117 of user core.
Feb 13 20:15:23.680535 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:15:23.783727 sshd[4497]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:23.787122 systemd[1]: sshd@116-10.0.0.156:22-10.0.0.1:39610.service: Deactivated successfully.
Feb 13 20:15:23.788725 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:15:23.789357 systemd-logind[1427]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:15:23.790324 systemd-logind[1427]: Removed session 117.
Feb 13 20:15:24.197715 kubelet[2437]: E0213 20:15:24.197656 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:15:25.043568 kubelet[2437]: E0213 20:15:25.043467 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:15:25.044033 kubelet[2437]: E0213 20:15:25.044000 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-mkbnt" podUID="dbd39ffa-0aa6-4acd-b1d8-c7e908994dc8"