Feb 13 19:51:00.890462 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:51:00.890481 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 19:51:00.890491 kernel: KASLR enabled Feb 13 19:51:00.890496 kernel: efi: EFI v2.7 by EDK II Feb 13 19:51:00.890502 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Feb 13 19:51:00.890508 kernel: random: crng init done Feb 13 19:51:00.890515 kernel: ACPI: Early table checksum verification disabled Feb 13 19:51:00.890521 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Feb 13 19:51:00.890527 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:51:00.890534 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890540 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890546 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890552 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890558 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890565 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890573 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890579 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890586 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:51:00.890592 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:51:00.890599 kernel: NUMA: Failed to initialise from firmware Feb 13 19:51:00.890605 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:51:00.890611 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Feb 13 19:51:00.890617 kernel: Zone ranges: Feb 13 19:51:00.890624 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:51:00.890630 kernel: DMA32 empty Feb 13 19:51:00.890637 kernel: Normal empty Feb 13 19:51:00.890643 kernel: Movable zone start for each node Feb 13 19:51:00.890650 kernel: Early memory node ranges Feb 13 19:51:00.890656 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 19:51:00.890663 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:51:00.890669 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:51:00.890675 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:51:00.890681 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:51:00.890688 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:51:00.890694 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:51:00.890700 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:51:00.890706 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:51:00.890714 kernel: psci: probing for conduit method from ACPI. Feb 13 19:51:00.890720 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 19:51:00.890726 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:51:00.890735 kernel: psci: Trusted OS migration not required Feb 13 19:51:00.890742 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:51:00.890749 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:51:00.890756 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:51:00.890763 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:51:00.890770 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:51:00.890777 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:51:00.890784 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:51:00.890790 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:51:00.890804 kernel: CPU features: detected: Spectre-v4 Feb 13 19:51:00.890811 kernel: CPU features: detected: Spectre-BHB Feb 13 19:51:00.890818 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:51:00.890825 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:51:00.890834 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:51:00.890841 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:51:00.890847 kernel: alternatives: applying boot alternatives Feb 13 19:51:00.890855 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:51:00.890862 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:51:00.890869 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:51:00.890876 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:51:00.890882 kernel: Fallback order for Node 0: 0 Feb 13 19:51:00.890889 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:51:00.890895 kernel: Policy zone: DMA Feb 13 19:51:00.890902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:51:00.890910 kernel: software IO TLB: area num 4. Feb 13 19:51:00.890918 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:51:00.890925 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Feb 13 19:51:00.890932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:51:00.890939 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:51:00.890946 kernel: rcu: RCU event tracing is enabled. Feb 13 19:51:00.890953 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:51:00.890960 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:51:00.890967 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:51:00.890975 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:51:00.890981 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:51:00.890988 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:51:00.890996 kernel: GICv3: 256 SPIs implemented Feb 13 19:51:00.891002 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:51:00.891009 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:51:00.891016 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:51:00.891023 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:51:00.891030 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:51:00.891036 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:51:00.891043 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:51:00.891050 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:51:00.891057 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:51:00.891063 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:51:00.891071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:51:00.891078 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:51:00.891085 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:51:00.891092 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:51:00.891099 kernel: arm-pv: using stolen time PV Feb 13 19:51:00.891106 kernel: Console: colour dummy device 80x25 Feb 13 19:51:00.891113 kernel: ACPI: Core revision 20230628 Feb 13 19:51:00.891120 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:51:00.891127 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:51:00.891134 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:51:00.891141 kernel: landlock: Up and running. Feb 13 19:51:00.891148 kernel: SELinux: Initializing. Feb 13 19:51:00.891155 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:51:00.891162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:51:00.891169 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:51:00.891176 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:51:00.891183 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:51:00.891202 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:51:00.891210 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:51:00.891218 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:51:00.891225 kernel: Remapping and enabling EFI services. Feb 13 19:51:00.891232 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:51:00.891239 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:51:00.891246 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:51:00.891253 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:51:00.891260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:51:00.891266 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:51:00.891273 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:51:00.891280 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:51:00.891289 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:51:00.891296 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:51:00.891306 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:51:00.891315 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:51:00.891322 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:51:00.891329 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:51:00.891336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:51:00.891343 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:51:00.891351 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:51:00.891359 kernel: SMP: Total of 4 processors activated. Feb 13 19:51:00.891366 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:51:00.891374 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:51:00.891381 kernel: CPU features: detected: Common not Private translations Feb 13 19:51:00.891388 kernel: CPU features: detected: CRC32 instructions Feb 13 19:51:00.891395 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:51:00.891402 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:51:00.891410 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:51:00.891418 kernel: CPU features: detected: Privileged Access Never Feb 13 19:51:00.891425 kernel: CPU features: detected: RAS Extension Support Feb 13 19:51:00.891432 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:51:00.891440 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:51:00.891447 kernel: alternatives: applying system-wide alternatives Feb 13 19:51:00.891454 kernel: devtmpfs: initialized Feb 13 19:51:00.891461 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:51:00.891469 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:51:00.891476 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:51:00.891484 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:51:00.891491 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Feb 13 19:51:00.891499 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:51:00.891506 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:51:00.891514 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:51:00.891521 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:51:00.891528 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:51:00.891535 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Feb 13 19:51:00.891543 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:51:00.891551 kernel: cpuidle: using governor menu Feb 13 19:51:00.891558 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:51:00.891565 kernel: ASID allocator initialised with 32768 entries Feb 13 19:51:00.891573 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:51:00.891580 kernel: Serial: AMBA PL011 UART driver Feb 13 19:51:00.891587 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:51:00.891594 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:51:00.891601 kernel: Modules: 509040 pages in range for PLT usage Feb 13 19:51:00.891608 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:51:00.891617 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:51:00.891624 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:51:00.891631 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:51:00.891638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:51:00.891646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:51:00.891653 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:51:00.891660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:51:00.891667 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:51:00.891674 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:51:00.891682 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:51:00.891690 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:51:00.891697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:51:00.891704 kernel: ACPI: Interpreter enabled Feb 13 19:51:00.891711 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:51:00.891719 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:51:00.891726 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:51:00.891733 kernel: printk: console [ttyAMA0] enabled Feb 13 19:51:00.891740 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:51:00.891864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:51:00.891937 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:51:00.892018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:51:00.892082 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:51:00.892145 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:51:00.892155 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 
19:51:00.892162 kernel: PCI host bridge to bus 0000:00 Feb 13 19:51:00.892241 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:51:00.892300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:51:00.892357 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:51:00.892413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:51:00.892488 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:51:00.892567 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:51:00.892632 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:51:00.892704 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:51:00.892769 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:51:00.892848 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:51:00.892914 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:51:00.892979 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:51:00.893038 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:51:00.893096 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:51:00.893154 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:51:00.893164 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:51:00.893171 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:51:00.893179 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:51:00.893193 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:51:00.893203 kernel: iommu: Default domain type: Translated Feb 13 19:51:00.893210 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:51:00.893217 kernel: efivars: Registered efivars operations Feb 13 19:51:00.893226 kernel: vgaarb: loaded Feb 13 19:51:00.893234 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:51:00.893241 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:51:00.893249 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:51:00.893256 kernel: pnp: PnP ACPI init Feb 13 19:51:00.893328 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:51:00.893339 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:51:00.893346 kernel: NET: Registered PF_INET protocol family Feb 13 19:51:00.893355 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:51:00.893363 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:51:00.893370 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:51:00.893378 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:51:00.893385 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:51:00.893392 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:51:00.893400 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:51:00.893407 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:51:00.893415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:51:00.893423 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:51:00.893430 kernel: kvm [1]: HYP mode 
not available Feb 13 19:51:00.893437 kernel: Initialise system trusted keyrings Feb 13 19:51:00.893445 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:51:00.893452 kernel: Key type asymmetric registered Feb 13 19:51:00.893459 kernel: Asymmetric key parser 'x509' registered Feb 13 19:51:00.893466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:51:00.893473 kernel: io scheduler mq-deadline registered Feb 13 19:51:00.893481 kernel: io scheduler kyber registered Feb 13 19:51:00.893489 kernel: io scheduler bfq registered Feb 13 19:51:00.893496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:51:00.893503 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:51:00.893511 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:51:00.893576 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:51:00.893586 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:51:00.893593 kernel: thunder_xcv, ver 1.0 Feb 13 19:51:00.893600 kernel: thunder_bgx, ver 1.0 Feb 13 19:51:00.893607 kernel: nicpf, ver 1.0 Feb 13 19:51:00.893616 kernel: nicvf, ver 1.0 Feb 13 19:51:00.893685 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:51:00.893747 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:51:00 UTC (1739476260) Feb 13 19:51:00.893756 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:51:00.893764 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:51:00.893771 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:51:00.893778 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:51:00.893785 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:51:00.893794 kernel: Segment Routing with IPv6 Feb 13 19:51:00.893809 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:51:00.893817 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:51:00.893824 kernel: Key type dns_resolver registered Feb 13 19:51:00.893831 kernel: registered taskstats version 1 Feb 13 19:51:00.893838 kernel: Loading compiled-in X.509 certificates Feb 13 19:51:00.893845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 19:51:00.893853 kernel: Key type .fscrypt registered Feb 13 19:51:00.893860 kernel: Key type fscrypt-provisioning registered Feb 13 19:51:00.893869 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:51:00.893876 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:51:00.893884 kernel: ima: No architecture policies found Feb 13 19:51:00.893891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:51:00.893898 kernel: clk: Disabling unused clocks Feb 13 19:51:00.893905 kernel: Freeing unused kernel memory: 39360K Feb 13 19:51:00.893912 kernel: Run /init as init process Feb 13 19:51:00.893919 kernel: with arguments: Feb 13 19:51:00.893926 kernel: /init Feb 13 19:51:00.893934 kernel: with environment: Feb 13 19:51:00.893942 kernel: HOME=/ Feb 13 19:51:00.893949 kernel: TERM=linux Feb 13 19:51:00.893956 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:51:00.893965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:00.893975 systemd[1]: Detected virtualization kvm. Feb 13 19:51:00.893983 systemd[1]: Detected architecture arm64. Feb 13 19:51:00.893990 systemd[1]: Running in initrd. Feb 13 19:51:00.893999 systemd[1]: No hostname configured, using default hostname. Feb 13 19:51:00.894006 systemd[1]: Hostname set to . Feb 13 19:51:00.894015 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:51:00.894022 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:51:00.894030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:00.894038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:00.894046 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:51:00.894054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:00.894063 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:51:00.894071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:51:00.894081 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:51:00.894089 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:51:00.894097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:00.894105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:00.894113 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:00.894143 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:00.894153 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:00.894161 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:00.894168 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:00.894176 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:00.894184 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:51:00.894199 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:51:00.894210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:51:00.894221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:00.894229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:00.894237 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:00.894245 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:51:00.894253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:00.894261 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:51:00.894269 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:51:00.894277 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:00.894285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:00.894294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:00.894302 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:00.894309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:00.894317 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:51:00.894343 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 19:51:00.894364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:00.894372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:00.894380 systemd-journald[238]: Journal started Feb 13 19:51:00.894400 systemd-journald[238]: Runtime Journal (/run/log/journal/5bca5c2c6f414c9f9e086795d41450e8) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:51:00.887968 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 19:51:00.898204 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:00.898238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:00.901178 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:51:00.901217 kernel: Bridge firewalling registered Feb 13 19:51:00.901587 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 19:51:00.902500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:00.903868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:00.907445 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:00.908428 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:00.911925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:00.914314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:00.922003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:00.923777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:00.925702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:00.927608 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:51:00.929347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:51:00.940291 dracut-cmdline[275]: dracut-dracut-053 Feb 13 19:51:00.942704 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:51:00.956789 systemd-resolved[276]: Positive Trust Anchors: Feb 13 19:51:00.956815 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:00.956848 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:00.961475 systemd-resolved[276]: Defaulting to hostname 'linux'. Feb 13 19:51:00.962508 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:00.963643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:01.007219 kernel: SCSI subsystem initialized Feb 13 19:51:01.012208 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:51:01.019216 kernel: iscsi: registered transport (tcp) Feb 13 19:51:01.031206 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:51:01.031221 kernel: QLogic iSCSI HBA Driver Feb 13 19:51:01.071867 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:01.081308 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:51:01.098326 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:51:01.098369 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:51:01.098389 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:51:01.144220 kernel: raid6: neonx8 gen() 15733 MB/s Feb 13 19:51:01.161220 kernel: raid6: neonx4 gen() 15590 MB/s Feb 13 19:51:01.178214 kernel: raid6: neonx2 gen() 13245 MB/s Feb 13 19:51:01.195205 kernel: raid6: neonx1 gen() 10472 MB/s Feb 13 19:51:01.212214 kernel: raid6: int64x8 gen() 6943 MB/s Feb 13 19:51:01.229213 kernel: raid6: int64x4 gen() 7335 MB/s Feb 13 19:51:01.246215 kernel: raid6: int64x2 gen() 6120 MB/s Feb 13 19:51:01.263214 kernel: raid6: int64x1 gen() 5046 MB/s Feb 13 19:51:01.263240 kernel: raid6: using algorithm neonx8 gen() 15733 MB/s Feb 13 19:51:01.280218 kernel: raid6: .... xor() 11915 MB/s, rmw enabled Feb 13 19:51:01.280237 kernel: raid6: using neon recovery algorithm Feb 13 19:51:01.287278 kernel: xor: measuring software checksum speed Feb 13 19:51:01.287306 kernel: 8regs : 19788 MB/sec Feb 13 19:51:01.287324 kernel: 32regs : 19263 MB/sec Feb 13 19:51:01.288205 kernel: arm64_neon : 27105 MB/sec Feb 13 19:51:01.288219 kernel: xor: using function: arm64_neon (27105 MB/sec) Feb 13 19:51:01.339219 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:51:01.349687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:51:01.362342 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:01.373009 systemd-udevd[460]: Using default interface naming scheme 'v255'. Feb 13 19:51:01.376114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:01.378352 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:51:01.392626 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Feb 13 19:51:01.418305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:01.426327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:01.465343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:01.473336 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:51:01.484244 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:01.485145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:01.488690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:01.489842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:01.497320 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:51:01.507724 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:51:01.515309 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:51:01.515403 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:51:01.515415 kernel: GPT:9289727 != 19775487 Feb 13 19:51:01.515424 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:51:01.515434 kernel: GPT:9289727 != 19775487 Feb 13 19:51:01.515442 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:51:01.515452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:51:01.511846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:01.511956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:01.514251 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:01.519294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:01.519425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:01.521003 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:01.528253 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508) Feb 13 19:51:01.530283 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (516) Feb 13 19:51:01.533445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:01.534514 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:01.544326 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:51:01.545397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:01.555169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:51:01.564436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 13 19:51:01.567900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:51:01.568780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:51:01.584336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:51:01.587035 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:01.591476 disk-uuid[554]: Primary Header is updated. Feb 13 19:51:01.591476 disk-uuid[554]: Secondary Entries is updated. Feb 13 19:51:01.591476 disk-uuid[554]: Secondary Header is updated. Feb 13 19:51:01.595941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:51:01.608511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:02.609209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:51:02.609914 disk-uuid[555]: The operation has completed successfully. Feb 13 19:51:02.630810 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:51:02.630908 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:51:02.654328 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:51:02.657046 sh[574]: Success Feb 13 19:51:02.668209 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:51:02.694672 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:51:02.714392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:51:02.717603 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:51:02.724276 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:51:02.724317 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:02.724338 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:51:02.725561 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:51:02.725575 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:51:02.729031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:51:02.730064 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:51:02.737360 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:51:02.738597 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:51:02.746215 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:51:02.746251 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:02.747197 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:51:02.748218 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:51:02.755020 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:51:02.757236 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:51:02.761544 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:51:02.768365 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:51:02.834411 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:02.843348 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:02.858688 ignition[667]: Ignition 2.19.0 Feb 13 19:51:02.858697 ignition[667]: Stage: fetch-offline Feb 13 19:51:02.858730 ignition[667]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:02.858737 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:02.858898 ignition[667]: parsed url from cmdline: "" Feb 13 19:51:02.858902 ignition[667]: no config URL provided Feb 13 19:51:02.858906 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:02.858913 ignition[667]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:02.858933 ignition[667]: op(1): [started] loading QEMU firmware config module Feb 13 19:51:02.858938 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:51:02.868956 systemd-networkd[766]: lo: Link UP Feb 13 19:51:02.868968 systemd-networkd[766]: lo: Gained carrier Feb 13 19:51:02.869659 systemd-networkd[766]: Enumeration completed Feb 13 19:51:02.870167 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:02.871069 systemd[1]: Reached target network.target - Network. Feb 13 19:51:02.872747 ignition[667]: op(1): [finished] loading QEMU firmware config module Feb 13 19:51:02.871506 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:02.871512 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:02.872333 systemd-networkd[766]: eth0: Link UP Feb 13 19:51:02.872336 systemd-networkd[766]: eth0: Gained carrier Feb 13 19:51:02.872343 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:02.885221 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:51:02.899633 ignition[667]: parsing config with SHA512: 5dc9f822997cb3d191d6a84b39e9cd4702dc2179e39172e6990701592f487c24abedf2fd84ba26ce2c2131fcf9206a63af8eac7b3c0a1572a5c65ebd9f566da5 Feb 13 19:51:02.903353 unknown[667]: fetched base config from "system" Feb 13 19:51:02.903361 unknown[667]: fetched user config from "qemu" Feb 13 19:51:02.903766 ignition[667]: fetch-offline: fetch-offline passed Feb 13 19:51:02.905347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:02.903838 ignition[667]: Ignition finished successfully Feb 13 19:51:02.906503 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:51:02.915334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:51:02.925073 ignition[771]: Ignition 2.19.0 Feb 13 19:51:02.925083 ignition[771]: Stage: kargs Feb 13 19:51:02.925279 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:02.925289 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:02.926221 ignition[771]: kargs: kargs passed Feb 13 19:51:02.926265 ignition[771]: Ignition finished successfully Feb 13 19:51:02.928639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 19:51:02.930716 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:51:02.942261 ignition[779]: Ignition 2.19.0 Feb 13 19:51:02.942273 ignition[779]: Stage: disks Feb 13 19:51:02.942423 ignition[779]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:02.942432 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:02.943284 ignition[779]: disks: disks passed Feb 13 19:51:02.943324 ignition[779]: Ignition finished successfully Feb 13 19:51:02.945515 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:51:02.946709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:02.947925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:51:02.949383 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:02.950913 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:02.952169 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:02.959311 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:51:02.968603 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:51:02.972620 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:51:02.988302 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:51:03.028203 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:51:03.028549 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:51:03.029538 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:51:03.047268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:51:03.048691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:51:03.049907 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:51:03.053264 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Feb 13 19:51:03.049945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:51:03.049966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:03.055113 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:51:03.058929 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:51:03.058997 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:03.059025 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:51:03.059351 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:51:03.062224 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:51:03.063257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:51:03.101646 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:51:03.104763 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:51:03.108743 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:51:03.112081 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:51:03.179810 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:03.190272 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:51:03.191555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:51:03.196232 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:51:03.210117 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:51:03.212945 ignition[912]: INFO : Ignition 2.19.0 Feb 13 19:51:03.212945 ignition[912]: INFO : Stage: mount Feb 13 19:51:03.214161 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:03.214161 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:03.214161 ignition[912]: INFO : mount: mount passed Feb 13 19:51:03.214161 ignition[912]: INFO : Ignition finished successfully Feb 13 19:51:03.215316 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:51:03.226337 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:51:03.723950 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:51:03.733340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:51:03.739350 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Feb 13 19:51:03.739382 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:51:03.739393 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:03.740465 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:51:03.742206 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:51:03.743412 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:51:03.762509 ignition[944]: INFO : Ignition 2.19.0 Feb 13 19:51:03.762509 ignition[944]: INFO : Stage: files Feb 13 19:51:03.763690 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:03.763690 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:03.763690 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:51:03.766175 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:51:03.766175 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:51:03.766175 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:51:03.766175 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:51:03.770480 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:51:03.770480 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:51:03.770480 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:51:03.770480 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:51:03.770480 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:51:03.766475 unknown[944]: wrote ssh authorized keys file for user: core Feb 13 19:51:03.906810 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:51:04.121614 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:51:04.121614 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:04.124267 ignition[944]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:51:04.124267 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:51:04.385397 systemd-networkd[766]: eth0: Gained IPv6LL Feb 13 19:51:04.432719 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:51:04.659396 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:51:04.659396 ignition[944]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 13 19:51:04.662003 ignition[944]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:51:04.683738 ignition[944]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:51:04.687129 ignition[944]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:51:04.688243 ignition[944]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:51:04.688243 ignition[944]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Feb 13 
19:51:04.688243 ignition[944]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:51:04.688243 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:04.688243 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:04.688243 ignition[944]: INFO : files: files passed Feb 13 19:51:04.688243 ignition[944]: INFO : Ignition finished successfully Feb 13 19:51:04.688984 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:51:04.710403 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:51:04.712356 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:51:04.714709 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:51:04.714822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:51:04.719500 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:51:04.722898 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:04.722898 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:04.725425 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:04.726376 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:04.727469 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:51:04.738372 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:51:04.756470 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:51:04.756603 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:51:04.758235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:51:04.759487 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:51:04.760780 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:51:04.761541 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:51:04.775949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:04.785345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:51:04.792752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:04.793686 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:04.795118 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:51:04.796406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:51:04.796524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:04.798476 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:51:04.800004 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:51:04.801172 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Feb 13 19:51:04.802483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:04.803911 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:04.805344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:51:04.806759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:04.808242 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:51:04.809825 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:51:04.811078 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:51:04.812262 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:51:04.812381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:04.814077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:04.815489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:04.816965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:51:04.821249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:04.822154 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:51:04.822283 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:04.824503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:51:04.824621 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:04.826047 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:51:04.827157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:51:04.829237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:04.830204 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:51:04.831920 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:51:04.833051 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:51:04.833149 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:04.834240 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:51:04.834323 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:04.835553 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:51:04.835658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:04.836933 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:51:04.837029 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:51:04.849366 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:51:04.850038 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:51:04.850167 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:04.852738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:51:04.853544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:51:04.853657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:04.854976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:51:04.855065 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:51:04.859267 ignition[1000]: INFO : Ignition 2.19.0 Feb 13 19:51:04.860275 ignition[1000]: INFO : Stage: umount Feb 13 19:51:04.861009 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:04.861009 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:51:04.862757 ignition[1000]: INFO : umount: umount passed Feb 13 19:51:04.862757 ignition[1000]: INFO : Ignition finished successfully Feb 13 19:51:04.863401 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:51:04.864305 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:51:04.867584 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:51:04.868054 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:51:04.868135 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:51:04.869454 systemd[1]: Stopped target network.target - Network. Feb 13 19:51:04.870231 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:51:04.870295 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:51:04.871583 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:51:04.871621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:51:04.872798 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:51:04.872838 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:51:04.874027 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:51:04.874064 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:04.875475 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:51:04.876711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:51:04.882976 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:51:04.883093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:51:04.884244 systemd-networkd[766]: eth0: DHCPv6 lease lost Feb 13 19:51:04.885417 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:51:04.885471 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:04.886833 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:51:04.886940 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:51:04.888572 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:51:04.888626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:04.900298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:51:04.900968 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:51:04.901028 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:04.902851 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:51:04.902896 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:04.904424 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:51:04.904461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:04.906398 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:51:04.916376 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:51:04.916514 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:51:04.924400 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:51:04.925398 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:51:04.926519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:51:04.926560 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:04.928849 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:51:04.928992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:04.930751 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:51:04.930806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:04.932259 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:51:04.932290 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:04.934059 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:51:04.934105 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:04.936313 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:51:04.936357 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:04.938533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:04.938578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:04.953314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:51:04.954346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:51:04.954405 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:04.956291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:04.956339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:04.961222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:51:04.961325 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:51:04.963520 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:51:04.965497 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:51:04.974441 systemd[1]: Switching root. Feb 13 19:51:04.993693 systemd-journald[238]: Journal stopped Feb 13 19:51:05.724469 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:51:05.724524 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:51:05.724536 kernel: SELinux: policy capability open_perms=1 Feb 13 19:51:05.724546 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:51:05.724559 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:51:05.724568 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:51:05.724578 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:51:05.724588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:51:05.724603 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:51:05.724613 kernel: audit: type=1403 audit(1739476265.187:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:51:05.724623 systemd[1]: Successfully loaded SELinux policy in 37.277ms. Feb 13 19:51:05.724640 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.134ms. Feb 13 19:51:05.724652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:05.724664 systemd[1]: Detected virtualization kvm. Feb 13 19:51:05.724674 systemd[1]: Detected architecture arm64. Feb 13 19:51:05.724684 systemd[1]: Detected first boot. Feb 13 19:51:05.724695 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:51:05.724706 zram_generator::config[1063]: No configuration found. Feb 13 19:51:05.724721 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:51:05.724731 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:51:05.724742 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:51:05.724756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:51:05.724766 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:51:05.724786 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:51:05.724802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:51:05.724813 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:51:05.724824 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:51:05.724835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:51:05.724845 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:51:05.724857 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:05.724869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:05.724880 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:51:05.724890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:51:05.724901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:51:05.724912 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:51:05.724922 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:51:05.724932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:05.724944 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:51:05.724954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:05.724966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:05.724977 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:05.724988 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:05.724998 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:51:05.725009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:51:05.725020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:51:05.725030 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:51:05.725041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:05.725053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:05.725064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:05.725074 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:51:05.725086 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:51:05.725096 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:51:05.725107 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:51:05.725117 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:51:05.725128 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:51:05.725138 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:51:05.725150 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:51:05.725161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:05.725172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:05.725183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:51:05.725217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:05.725229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:51:05.725240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:05.725250 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:51:05.725263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:05.725275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:51:05.725286 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:51:05.725297 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 19:51:05.725308 systemd[1]: Starting systemd-journald.service - Journal Service... 
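Editor's note: the "unit configures an IP firewall" message above means systemd-journald.service carries per-unit IP filtering directives that require BPF/cgroup firewalling, which this kernel configuration does not provide, so the directives are ignored. A minimal sketch of the kind of directive involved, not taken from the shipped unit:

    [Service]
    # Hypothetical example of per-unit IP filtering; on kernels without
    # cgroup/BPF support these lines produce exactly the notice logged above.
    IPAddressDeny=any
    IPAddressAllow=localhost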
Feb 13 19:51:05.725319 kernel: fuse: init (API version 7.39) Feb 13 19:51:05.725329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:05.725341 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:51:05.725351 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:51:05.725362 kernel: loop: module loaded Feb 13 19:51:05.725372 kernel: ACPI: bus type drm_connector registered Feb 13 19:51:05.725382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:05.725410 systemd-journald[1138]: Collecting audit messages is disabled. Feb 13 19:51:05.725433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:51:05.725445 systemd-journald[1138]: Journal started Feb 13 19:51:05.725467 systemd-journald[1138]: Runtime Journal (/run/log/journal/5bca5c2c6f414c9f9e086795d41450e8) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:51:05.727229 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:05.727850 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:51:05.728767 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:51:05.729588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:51:05.730485 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:51:05.731380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:51:05.732459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:05.733574 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:51:05.733736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:51:05.734951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:05.735226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:05.736265 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:51:05.736412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:05.737554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:05.737700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:05.738828 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:51:05.738978 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:51:05.740008 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:05.740298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:05.741415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:51:05.742689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:05.743877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:51:05.745353 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:51:05.755989 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:51:05.763306 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:51:05.765112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 19:51:05.766051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:51:05.768380 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:51:05.773404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:51:05.776349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:05.777367 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:51:05.778517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:05.779077 systemd-journald[1138]: Time spent on flushing to /var/log/journal/5bca5c2c6f414c9f9e086795d41450e8 is 17.287ms for 841 entries. Feb 13 19:51:05.779077 systemd-journald[1138]: System Journal (/var/log/journal/5bca5c2c6f414c9f9e086795d41450e8) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:51:05.804602 systemd-journald[1138]: Received client request to flush runtime journal. Feb 13 19:51:05.779550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:05.784347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:05.786629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:05.787886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:51:05.789145 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:51:05.801350 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:51:05.802751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:51:05.804559 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:51:05.809543 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:51:05.813551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:05.817615 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:51:05.819694 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 19:51:05.819712 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 19:51:05.823869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:05.834400 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:51:05.861676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:51:05.871393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:05.882116 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Feb 13 19:51:05.882135 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Feb 13 19:51:05.885730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:06.207784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:51:06.219327 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
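Editor's note: the journal sizes reported above (runtime journal 5.9M of a 47.3M cap in /run, system journal 8.0M of 195.6M on /var) are computed automatically from the size of the backing filesystems. If fixed limits were wanted they could be pinned with a journald drop-in; a minimal sketch with made-up values:

    # Hypothetical /etc/systemd/journald.conf.d/size.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M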
Feb 13 19:51:06.237750 systemd-udevd[1225]: Using default interface naming scheme 'v255'. Feb 13 19:51:06.251885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:06.263903 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:06.268060 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:51:06.277518 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 19:51:06.284215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1226) Feb 13 19:51:06.328315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:51:06.336894 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:51:06.391941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:06.403642 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:51:06.410154 systemd-networkd[1233]: lo: Link UP Feb 13 19:51:06.410162 systemd-networkd[1233]: lo: Gained carrier Feb 13 19:51:06.410840 systemd-networkd[1233]: Enumeration completed Feb 13 19:51:06.411264 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:06.411267 systemd-networkd[1233]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:06.413680 systemd-networkd[1233]: eth0: Link UP Feb 13 19:51:06.413683 systemd-networkd[1233]: eth0: Gained carrier Feb 13 19:51:06.413694 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:06.414366 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:51:06.415855 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:06.420894 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:51:06.431911 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:06.438240 systemd-networkd[1233]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:51:06.453368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:06.465638 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:51:06.466833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:06.481336 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:51:06.484832 lvm[1272]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:06.516620 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:51:06.517758 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:51:06.518709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:51:06.518737 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:06.519513 systemd[1]: Reached target machines.target - Containers. 
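Editor's note: above, systemd-networkd matches eth0 against the stock /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.116/16 from 10.0.0.1 over DHCPv4. The shipped file itself is not reproduced in the log; a minimal sketch of a catch-all DHCP network unit of that shape, which may differ from the real one:

    # Hypothetical zz-default.network-style catch-all unit
    [Match]
    Name=*

    [Network]
    DHCP=yes
    IPv6AcceptRA=yes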
Feb 13 19:51:06.521332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:51:06.531333 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:51:06.533340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:51:06.534219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:06.535114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:51:06.537093 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:51:06.539368 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:51:06.541756 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:51:06.547482 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:51:06.555219 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:51:06.571442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:51:06.572229 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:51:06.577207 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:51:06.618263 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 19:51:06.663236 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 19:51:06.710240 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:51:06.715206 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 19:51:06.721223 kernel: loop5: detected capacity change from 0 to 114328 Feb 13 19:51:06.729231 (sd-merge)[1295]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:51:06.729632 (sd-merge)[1295]: Merged extensions into '/usr'. Feb 13 19:51:06.733838 systemd[1]: Reloading requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:51:06.733852 systemd[1]: Reloading... Feb 13 19:51:06.771214 zram_generator::config[1324]: No configuration found. Feb 13 19:51:06.805153 ldconfig[1277]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:51:06.868841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:06.914967 systemd[1]: Reloading finished in 180 ms. Feb 13 19:51:06.934875 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:51:06.936013 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:51:06.949328 systemd[1]: Starting ensure-sysext.service... Feb 13 19:51:06.950931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:06.954331 systemd[1]: Reloading requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:51:06.954342 systemd[1]: Reloading... Feb 13 19:51:06.966092 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
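Editor's note: the (sd-merge) lines above show systemd-sysext activating the containerd-flatcar, docker-flatcar and kubernetes extension images (the kubernetes one being the .raw file Ignition linked into /etc/extensions earlier) and overlaying them onto /usr. Before merging, each image must carry an extension-release file whose identity fields are compatible with the host; a sketch of such a file with assumed values, not the actual contents of the kubernetes-v1.30.1 image:

    # Hypothetical usr/lib/extension-release.d/extension-release.kubernetes
    # inside the sysext image; ID (or ID=_any) plus SYSEXT_LEVEL/VERSION_ID
    # must match the host os-release for the merge to proceed.
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64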
Feb 13 19:51:06.966388 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:51:06.966994 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:51:06.967227 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Feb 13 19:51:06.967278 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Feb 13 19:51:06.969435 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:06.969448 systemd-tmpfiles[1366]: Skipping /boot Feb 13 19:51:06.978537 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:06.978553 systemd-tmpfiles[1366]: Skipping /boot Feb 13 19:51:06.993644 zram_generator::config[1397]: No configuration found. Feb 13 19:51:07.087712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:07.133927 systemd[1]: Reloading finished in 179 ms. Feb 13 19:51:07.148896 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:07.160750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:51:07.162756 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:51:07.165343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:51:07.168428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:51:07.177685 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:51:07.187246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:51:07.191781 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:51:07.194679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:07.203474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:07.208436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:07.211404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:07.212338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:07.214252 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:51:07.215859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:07.216000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:07.217332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:07.217465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:07.221180 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:07.224359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:07.226605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
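Editor's note: the "Duplicate line for path" notices above are harmless; two tmpfiles.d fragments declare the same path and systemd-tmpfiles keeps the first definition it reads. A sketch of the tmpfiles.d line format involved, using illustrative entries rather than the exact contents of provision.conf or systemd-flatcar.conf:

    # Hypothetical tmpfiles.d entries: type, path, mode, user, group, age
    d /root            0700 root root            -
    d /var/log/journal 2755 root systemd-journal -
    d /var/lib/systemd 0755 root root            -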
Feb 13 19:51:07.231552 augenrules[1472]: No rules Feb 13 19:51:07.238508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:07.240360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:07.241289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:07.241953 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:51:07.243279 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:51:07.245033 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:51:07.246868 systemd-resolved[1441]: Positive Trust Anchors: Feb 13 19:51:07.246886 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:07.246918 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:07.251148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:07.251583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:07.253674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:07.254465 systemd-resolved[1441]: Defaulting to hostname 'linux'. Feb 13 19:51:07.254698 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:51:07.257440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:07.258292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:07.258346 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:51:07.258776 systemd[1]: Finished ensure-sysext.service. Feb 13 19:51:07.259617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:07.259751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:07.261104 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:51:07.261256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:07.264387 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:07.265469 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:07.265645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:07.267248 systemd[1]: Reached target network.target - Network. Feb 13 19:51:07.267929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
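Editor's note: the positive trust anchor listed above is resolved's built-in DNSSEC root key (the root-zone KSK-2017 DS record, key tag 20326), and the negative anchors are the standard private and special-use domains exempted from validation; neither comes from local configuration. Behaviour could be adjusted through resolved.conf; a minimal sketch with assumed, non-default values:

    # Hypothetical /etc/systemd/resolved.conf.d/dnssec.conf
    [Resolve]
    DNSSEC=allow-downgrade
    FallbackDNS=9.9.9.9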
Feb 13 19:51:07.268919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:07.268978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:07.286385 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:51:07.328968 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:51:07.329649 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:51:07.329700 systemd-timesyncd[1498]: Initial clock synchronization to Thu 2025-02-13 19:51:07.474332 UTC. Feb 13 19:51:07.330226 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:07.331023 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:51:07.331986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:51:07.332902 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:51:07.333838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:51:07.333873 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:07.334526 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:51:07.335349 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:51:07.336181 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:51:07.337055 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:07.338306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:51:07.340279 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:51:07.342144 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:51:07.348202 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:51:07.348974 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:07.349691 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:07.350487 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:51:07.350531 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:07.350550 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:07.351529 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:51:07.353244 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:51:07.354916 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:51:07.358794 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:51:07.359702 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:51:07.361355 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:51:07.363089 jq[1504]: false Feb 13 19:51:07.364335 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
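Editor's note: systemd-timesyncd above synchronizes against 10.0.0.1:123, a server that in this QEMU/DHCP setup is most plausibly advertised by the DHCP server rather than configured statically. If a fixed server were wanted it could be set through timesyncd.conf; a minimal sketch with assumed values:

    # Hypothetical /etc/systemd/timesyncd.conf.d/local.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org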
Feb 13 19:51:07.368871 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:51:07.373512 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:51:07.376468 dbus-daemon[1503]: [system] SELinux support is enabled Feb 13 19:51:07.376970 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:51:07.379918 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:51:07.381374 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:51:07.385447 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:51:07.387563 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:51:07.390672 extend-filesystems[1506]: Found loop3 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found loop4 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found loop5 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda1 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda2 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda3 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found usr Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda4 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda6 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda7 Feb 13 19:51:07.390672 extend-filesystems[1506]: Found vda9 Feb 13 19:51:07.393839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:51:07.407874 extend-filesystems[1506]: Checking size of /dev/vda9 Feb 13 19:51:07.394061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:51:07.408561 jq[1524]: true Feb 13 19:51:07.394308 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:51:07.394500 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:51:07.399749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:51:07.399956 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:51:07.410641 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:51:07.415841 jq[1533]: true Feb 13 19:51:07.415413 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:51:07.415452 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:51:07.418160 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:51:07.418260 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
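Editor's note: prepare-helm.service, whose start appears above, is the Ignition-written oneshot that later produces the tar output for linux-arm64/helm and finishes shortly before getty is reached. Its actual unit text is not reproduced in the log; the following is a speculative reconstruction of what such a unit commonly looks like, with the archive path and every other specific assumed:

    # Hypothetical reconstruction of the Ignition-provisioned prepare-helm.service
    [Unit]
    Description=Unpack helm to /opt/bin
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStartPre=/usr/bin/mkdir -p /opt/bin
    # Assumed extraction step; the logged tar output only shows that
    # linux-arm64/helm, LICENSE and README.md were unpacked.
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm.tar.gz

    [Install]
    WantedBy=multi-user.target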
Feb 13 19:51:07.433409 tar[1530]: linux-arm64/helm Feb 13 19:51:07.434309 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1238) Feb 13 19:51:07.435507 extend-filesystems[1506]: Resized partition /dev/vda9 Feb 13 19:51:07.437198 extend-filesystems[1553]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:51:07.441238 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:51:07.453195 update_engine[1518]: I20250213 19:51:07.451234 1518 main.cc:92] Flatcar Update Engine starting Feb 13 19:51:07.457243 update_engine[1518]: I20250213 19:51:07.457063 1518 update_check_scheduler.cc:74] Next update check in 7m5s Feb 13 19:51:07.457276 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:51:07.459009 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:51:07.465369 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:51:07.466199 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:51:07.494995 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:51:07.495277 systemd-logind[1515]: New seat seat0. Feb 13 19:51:07.495853 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:51:07.503946 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:51:07.503946 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:51:07.503946 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:51:07.502293 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:51:07.508262 extend-filesystems[1506]: Resized filesystem in /dev/vda9 Feb 13 19:51:07.502604 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:51:07.510519 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:07.512032 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:51:07.514603 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:51:07.557775 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:51:07.639248 containerd[1534]: time="2025-02-13T19:51:07.638108680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:51:07.668194 containerd[1534]: time="2025-02-13T19:51:07.668149320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.669593 containerd[1534]: time="2025-02-13T19:51:07.669556960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:07.669593 containerd[1534]: time="2025-02-13T19:51:07.669591720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:51:07.669640 containerd[1534]: time="2025-02-13T19:51:07.669608360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 19:51:07.669793 containerd[1534]: time="2025-02-13T19:51:07.669770360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:51:07.669820 containerd[1534]: time="2025-02-13T19:51:07.669795600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.669882 containerd[1534]: time="2025-02-13T19:51:07.669863280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:07.669910 containerd[1534]: time="2025-02-13T19:51:07.669880560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670094 containerd[1534]: time="2025-02-13T19:51:07.670071360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670133 containerd[1534]: time="2025-02-13T19:51:07.670092840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670133 containerd[1534]: time="2025-02-13T19:51:07.670111760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670133 containerd[1534]: time="2025-02-13T19:51:07.670121520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670237 containerd[1534]: time="2025-02-13T19:51:07.670207480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670429 containerd[1534]: time="2025-02-13T19:51:07.670406840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670572 containerd[1534]: time="2025-02-13T19:51:07.670551080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:07.670572 containerd[1534]: time="2025-02-13T19:51:07.670570880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:51:07.670657 containerd[1534]: time="2025-02-13T19:51:07.670640560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:51:07.670699 containerd[1534]: time="2025-02-13T19:51:07.670683920Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:51:07.673944 containerd[1534]: time="2025-02-13T19:51:07.673918800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:51:07.673992 containerd[1534]: time="2025-02-13T19:51:07.673967000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:51:07.673992 containerd[1534]: time="2025-02-13T19:51:07.673984600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 19:51:07.674027 containerd[1534]: time="2025-02-13T19:51:07.674000160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:51:07.674027 containerd[1534]: time="2025-02-13T19:51:07.674016240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674152400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674456680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674573800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674589760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674602600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674616400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674628520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674640240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674652760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674665480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674677160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674688560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674700200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:51:07.674718 containerd[1534]: time="2025-02-13T19:51:07.674719960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674733760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674745800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674774400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674788680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674804280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674816040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674828480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674843680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674857840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674869520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674880640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674892440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674910800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674930080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675029 containerd[1534]: time="2025-02-13T19:51:07.674941480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.674951880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675065680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675082160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675092680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675103520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675112800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675124280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675133600Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:51:07.675303 containerd[1534]: time="2025-02-13T19:51:07.675143800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:51:07.675562 containerd[1534]: time="2025-02-13T19:51:07.675490040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:51:07.675562 containerd[1534]: time="2025-02-13T19:51:07.675553160Z" level=info msg="Connect containerd service" Feb 13 19:51:07.679062 containerd[1534]: time="2025-02-13T19:51:07.679029960Z" level=info msg="using legacy CRI server" Feb 13 19:51:07.679062 containerd[1534]: time="2025-02-13T19:51:07.679048240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:51:07.679165 containerd[1534]: time="2025-02-13T19:51:07.679126080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:51:07.679735 
containerd[1534]: time="2025-02-13T19:51:07.679689840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680220960Z" level=info msg="Start subscribing containerd event" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680277440Z" level=info msg="Start recovering state" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680449080Z" level=info msg="Start event monitor" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680481400Z" level=info msg="Start snapshots syncer" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680492640Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:51:07.680884 containerd[1534]: time="2025-02-13T19:51:07.680504560Z" level=info msg="Start streaming server" Feb 13 19:51:07.681057 containerd[1534]: time="2025-02-13T19:51:07.681028640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:51:07.682127 containerd[1534]: time="2025-02-13T19:51:07.681079520Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:51:07.682127 containerd[1534]: time="2025-02-13T19:51:07.681143920Z" level=info msg="containerd successfully booted in 0.046249s" Feb 13 19:51:07.681333 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:51:07.803987 tar[1530]: linux-arm64/LICENSE Feb 13 19:51:07.804235 tar[1530]: linux-arm64/README.md Feb 13 19:51:07.816708 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:51:08.281221 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:51:08.290347 systemd-networkd[1233]: eth0: Gained IPv6LL Feb 13 19:51:08.292144 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:51:08.294114 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:51:08.301425 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:51:08.304330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:08.307237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:51:08.308805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:51:08.315466 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:51:08.325392 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:51:08.325733 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:51:08.327163 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:51:08.327471 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:51:08.329248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:51:08.331643 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:51:08.334255 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:51:08.343070 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:51:08.345689 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Feb 13 19:51:08.347685 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:51:08.349116 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:51:08.805574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:08.806843 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:51:08.808647 systemd[1]: Startup finished in 5.019s (kernel) + 3.658s (userspace) = 8.678s. Feb 13 19:51:08.809089 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:09.288025 kubelet[1639]: E0213 19:51:09.287899 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:09.290383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:09.290574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:13.291485 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:51:13.303399 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:60244.service - OpenSSH per-connection server daemon (10.0.0.1:60244). Feb 13 19:51:13.354060 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:51:13.355573 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.368009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:51:13.382403 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:51:13.384357 systemd-logind[1515]: New session 1 of user core. Feb 13 19:51:13.392828 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:51:13.395147 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:51:13.400795 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:51:13.466882 systemd[1659]: Queued start job for default target default.target. Feb 13 19:51:13.467217 systemd[1659]: Created slice app.slice - User Application Slice. Feb 13 19:51:13.467238 systemd[1659]: Reached target paths.target - Paths. Feb 13 19:51:13.467249 systemd[1659]: Reached target timers.target - Timers. Feb 13 19:51:13.480325 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:51:13.485366 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:51:13.485413 systemd[1659]: Reached target sockets.target - Sockets. Feb 13 19:51:13.485424 systemd[1659]: Reached target basic.target - Basic System. Feb 13 19:51:13.485457 systemd[1659]: Reached target default.target - Main User Target. Feb 13 19:51:13.485478 systemd[1659]: Startup finished in 80ms. Feb 13 19:51:13.485751 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:51:13.487010 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:51:13.542495 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250). 
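The kubelet exit above (run.go:74, /var/lib/kubelet/config.yaml missing) is the normal state of a node that has not yet been joined to a cluster: that file is typically written by kubeadm init/join, and systemd keeps restarting the unit until it exists, which is why the same error recurs later in this log. For orientation only, a minimal sketch of such a config file is shown below; the authentication and authorization values are illustrative, while the client CA path, cgroup driver, and static pod path are the ones this kubelet itself reports further down in this boot.

    # /var/lib/kubelet/config.yaml -- minimal sketch; the real file is normally generated by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt   # matches the client-ca-bundle path logged below
    authorization:
      mode: Webhook
    cgroupDriver: cgroupfs                         # matches CgroupDriver in the container-manager dump below
    staticPodPath: /etc/kubernetes/manifests       # matches "Adding static pod path" below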
Feb 13 19:51:13.578808 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:51:13.580018 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.584259 systemd-logind[1515]: New session 2 of user core. Feb 13 19:51:13.592416 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:51:13.643134 sshd[1671]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:13.656413 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:60266.service - OpenSSH per-connection server daemon (10.0.0.1:60266). Feb 13 19:51:13.656867 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:60250.service: Deactivated successfully. Feb 13 19:51:13.658159 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:51:13.659167 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:51:13.660122 systemd-logind[1515]: Removed session 2. Feb 13 19:51:13.690664 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 60266 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:51:13.691754 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.695361 systemd-logind[1515]: New session 3 of user core. Feb 13 19:51:13.706485 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:51:13.754076 sshd[1676]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:13.768389 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:60268.service - OpenSSH per-connection server daemon (10.0.0.1:60268). Feb 13 19:51:13.768726 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:60266.service: Deactivated successfully. Feb 13 19:51:13.770271 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:51:13.770791 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:51:13.771970 systemd-logind[1515]: Removed session 3. Feb 13 19:51:13.802563 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 60268 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:51:13.803533 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.806667 systemd-logind[1515]: New session 4 of user core. Feb 13 19:51:13.816387 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:51:13.867087 sshd[1684]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:13.878396 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:60274.service - OpenSSH per-connection server daemon (10.0.0.1:60274). Feb 13 19:51:13.878743 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:60268.service: Deactivated successfully. Feb 13 19:51:13.880357 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:51:13.880859 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:51:13.882215 systemd-logind[1515]: Removed session 4. Feb 13 19:51:13.912551 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 60274 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:51:13.913658 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.917280 systemd-logind[1515]: New session 5 of user core. Feb 13 19:51:13.928507 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:51:13.987139 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:51:13.987410 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:14.287721 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:51:14.288136 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:51:14.544863 dockerd[1717]: time="2025-02-13T19:51:14.544510987Z" level=info msg="Starting up" Feb 13 19:51:14.787675 dockerd[1717]: time="2025-02-13T19:51:14.787631409Z" level=info msg="Loading containers: start." Feb 13 19:51:14.870214 kernel: Initializing XFRM netlink socket Feb 13 19:51:14.926832 systemd-networkd[1233]: docker0: Link UP Feb 13 19:51:14.948307 dockerd[1717]: time="2025-02-13T19:51:14.948270977Z" level=info msg="Loading containers: done." Feb 13 19:51:14.961212 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1439433591-merged.mount: Deactivated successfully. Feb 13 19:51:14.962426 dockerd[1717]: time="2025-02-13T19:51:14.962380474Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:51:14.962492 dockerd[1717]: time="2025-02-13T19:51:14.962469552Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:51:14.962574 dockerd[1717]: time="2025-02-13T19:51:14.962556896Z" level=info msg="Daemon has completed initialization" Feb 13 19:51:14.987785 dockerd[1717]: time="2025-02-13T19:51:14.987665192Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:51:14.987886 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:51:15.623632 containerd[1534]: time="2025-02-13T19:51:15.623500930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:51:16.489715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404588585.mount: Deactivated successfully. 
Feb 13 19:51:18.755256 containerd[1534]: time="2025-02-13T19:51:18.755208997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:18.756305 containerd[1534]: time="2025-02-13T19:51:18.756061275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:51:18.756927 containerd[1534]: time="2025-02-13T19:51:18.756898964Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:18.759723 containerd[1534]: time="2025-02-13T19:51:18.759675116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:18.761017 containerd[1534]: time="2025-02-13T19:51:18.760885451Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.137325681s" Feb 13 19:51:18.761017 containerd[1534]: time="2025-02-13T19:51:18.760949394Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:51:18.780135 containerd[1534]: time="2025-02-13T19:51:18.780077939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:51:19.330856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:51:19.341356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:19.431010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:19.434867 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:19.477830 kubelet[1944]: E0213 19:51:19.477775 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:19.480753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:19.480938 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
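The PullImage / ImageCreate / "Pulled image ... in 3.137325681s" sequence above is the containerd-side trace of an image pull. The kubelet drives this through the CRI gRPC API, but as a rough sketch the same pull can be reproduced with the containerd Go client against the socket reported at startup; the socket path and image reference below come from this log, and the "k8s.io" namespace is where containerd's CRI plugin keeps its images.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path as reported in the containerd startup messages above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Same image reference the kubelet pulled in the log above.
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.30.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }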
Feb 13 19:51:20.635320 containerd[1534]: time="2025-02-13T19:51:20.635253974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:20.638665 containerd[1534]: time="2025-02-13T19:51:20.636363443Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:51:20.639423 containerd[1534]: time="2025-02-13T19:51:20.639396015Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:20.642726 containerd[1534]: time="2025-02-13T19:51:20.642677812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:20.643465 containerd[1534]: time="2025-02-13T19:51:20.643431630Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.863307083s" Feb 13 19:51:20.643522 containerd[1534]: time="2025-02-13T19:51:20.643466316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:51:20.661346 containerd[1534]: time="2025-02-13T19:51:20.661273339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:51:22.023440 containerd[1534]: time="2025-02-13T19:51:22.023390917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:22.024742 containerd[1534]: time="2025-02-13T19:51:22.024102336Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:51:22.025574 containerd[1534]: time="2025-02-13T19:51:22.025496857Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:22.031330 containerd[1534]: time="2025-02-13T19:51:22.029549774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:22.031330 containerd[1534]: time="2025-02-13T19:51:22.030738602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.369433676s" Feb 13 19:51:22.031330 containerd[1534]: time="2025-02-13T19:51:22.030766199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:51:22.049165 
containerd[1534]: time="2025-02-13T19:51:22.049140648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:51:23.227588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1360196240.mount: Deactivated successfully. Feb 13 19:51:23.560879 containerd[1534]: time="2025-02-13T19:51:23.560758484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:23.561394 containerd[1534]: time="2025-02-13T19:51:23.561229592Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:51:23.562029 containerd[1534]: time="2025-02-13T19:51:23.561997703Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:23.563835 containerd[1534]: time="2025-02-13T19:51:23.563804423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:23.564535 containerd[1534]: time="2025-02-13T19:51:23.564506293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.515334842s" Feb 13 19:51:23.564571 containerd[1534]: time="2025-02-13T19:51:23.564540737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:51:23.582546 containerd[1534]: time="2025-02-13T19:51:23.582517200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:51:24.208856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966900630.mount: Deactivated successfully. 
Feb 13 19:51:25.147454 containerd[1534]: time="2025-02-13T19:51:25.147395311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.147864 containerd[1534]: time="2025-02-13T19:51:25.147775580Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:51:25.150420 containerd[1534]: time="2025-02-13T19:51:25.150378637Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.153668 containerd[1534]: time="2025-02-13T19:51:25.153626938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.155381 containerd[1534]: time="2025-02-13T19:51:25.155347548Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.572794105s" Feb 13 19:51:25.155423 containerd[1534]: time="2025-02-13T19:51:25.155383535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:51:25.174286 containerd[1534]: time="2025-02-13T19:51:25.174251861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:51:25.651598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083086301.mount: Deactivated successfully. 
Feb 13 19:51:25.655401 containerd[1534]: time="2025-02-13T19:51:25.655357327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.656072 containerd[1534]: time="2025-02-13T19:51:25.656032908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:51:25.656738 containerd[1534]: time="2025-02-13T19:51:25.656702678Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.659430 containerd[1534]: time="2025-02-13T19:51:25.659399469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:25.660865 containerd[1534]: time="2025-02-13T19:51:25.660833345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 486.548743ms" Feb 13 19:51:25.660983 containerd[1534]: time="2025-02-13T19:51:25.660864603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:51:25.679054 containerd[1534]: time="2025-02-13T19:51:25.679023966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:51:26.237254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733437709.mount: Deactivated successfully. Feb 13 19:51:29.510957 containerd[1534]: time="2025-02-13T19:51:29.510902459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:29.511426 containerd[1534]: time="2025-02-13T19:51:29.511329526Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:51:29.512180 containerd[1534]: time="2025-02-13T19:51:29.512148223Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:29.515223 containerd[1534]: time="2025-02-13T19:51:29.515150869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:29.516430 containerd[1534]: time="2025-02-13T19:51:29.516391307Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.837333044s" Feb 13 19:51:29.516476 containerd[1534]: time="2025-02-13T19:51:29.516428067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:51:29.547327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:51:29.558519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:29.722155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:29.726067 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:29.764770 kubelet[2117]: E0213 19:51:29.764604 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:29.767133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:29.767329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:35.938047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:35.949417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:35.964414 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-5.scope)... Feb 13 19:51:35.964433 systemd[1]: Reloading... Feb 13 19:51:36.029223 zram_generator::config[2241]: No configuration found. Feb 13 19:51:36.261433 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:36.314054 systemd[1]: Reloading finished in 349 ms. Feb 13 19:51:36.347814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:51:36.347875 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:51:36.348103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:36.350207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:36.436484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:36.441370 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:36.478937 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:36.478937 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:36.478937 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
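The three deprecation warnings above all point at the same thing: those flags now belong in the kubelet config file rather than on the command line. For reference, the matching KubeletConfiguration fields are sketched below; the field names are from the v1beta1 kubelet config API, the endpoint is the containerd socket this host already uses, and the plugin directory is the Flexvolume path the kubelet recreates a few lines further down. --pod-infra-container-image has no config-file equivalent: as the warning itself says, the image garbage collector now takes the sandbox image from the CRI runtime.

    # fields that replace the deprecated flags in /var/lib/kubelet/config.yaml (sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/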
Feb 13 19:51:36.479799 kubelet[2295]: I0213 19:51:36.479747 2295 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:37.571800 kubelet[2295]: I0213 19:51:37.571745 2295 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:37.571800 kubelet[2295]: I0213 19:51:37.571785 2295 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:37.572147 kubelet[2295]: I0213 19:51:37.571992 2295 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:37.618452 kubelet[2295]: E0213 19:51:37.618423 2295 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.618558 kubelet[2295]: I0213 19:51:37.618526 2295 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:37.628503 kubelet[2295]: I0213 19:51:37.628486 2295 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:51:37.630554 kubelet[2295]: I0213 19:51:37.630481 2295 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:37.631202 kubelet[2295]: I0213 19:51:37.630712 2295 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:37.631202 kubelet[2295]: I0213 19:51:37.630963 2295 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:51:37.631202 kubelet[2295]: I0213 19:51:37.630973 2295 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:37.631360 kubelet[2295]: I0213 19:51:37.631229 2295 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:51:37.632120 kubelet[2295]: I0213 19:51:37.632104 2295 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:37.632120 kubelet[2295]: I0213 19:51:37.632121 2295 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:37.632719 kubelet[2295]: I0213 19:51:37.632449 2295 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:37.632719 kubelet[2295]: I0213 19:51:37.632633 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:37.632990 kubelet[2295]: W0213 19:51:37.632921 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.632990 kubelet[2295]: E0213 19:51:37.632970 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.633093 kubelet[2295]: W0213 19:51:37.633026 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.633093 kubelet[2295]: E0213 19:51:37.633064 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.633653 kubelet[2295]: I0213 19:51:37.633625 2295 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:37.633998 kubelet[2295]: I0213 19:51:37.633971 2295 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:37.634092 kubelet[2295]: W0213 19:51:37.634075 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
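"Adding static pod path" path="/etc/kubernetes/manifests" above, together with the Topology Admit Handler and host-path volume entries further down, shows the control-plane components being run as static pods straight from manifest files while the API server they implement is still unreachable (hence the connection-refused reflector errors). A heavily abbreviated sketch of one such manifest is below; kubeadm generates the real file, and only the image tag, the advertise address and port, and the k8s-certs volume name are taken from this log.

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- abbreviated sketch; kubeadm writes the real manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.10   # tag pulled earlier in this log
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.116                 # the address the kubelet keeps dialing above
        - --secure-port=6443
        # ...the real manifest carries many more flags and probes...
        volumeMounts:
        - name: k8s-certs                                # matches the VerifyControllerAttachedVolume entries below
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate

Once the kubelet admits these pods, the RunPodSandbox, CreateContainer, and StartContainer entries further down are exactly that happening.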
Feb 13 19:51:37.638850 kubelet[2295]: I0213 19:51:37.634877 2295 server.go:1264] "Started kubelet" Feb 13 19:51:37.638850 kubelet[2295]: I0213 19:51:37.637566 2295 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:37.638850 kubelet[2295]: I0213 19:51:37.638517 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:37.638850 kubelet[2295]: I0213 19:51:37.638629 2295 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:37.638850 kubelet[2295]: I0213 19:51:37.638728 2295 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:37.641341 kubelet[2295]: I0213 19:51:37.640250 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:37.641665 kubelet[2295]: E0213 19:51:37.641437 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dc79ffbcb3f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:51:37.634853881 +0000 UTC m=+1.189922672,LastTimestamp:2025-02-13 19:51:37.634853881 +0000 UTC m=+1.189922672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:51:37.642937 kubelet[2295]: I0213 19:51:37.642500 2295 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:37.642937 kubelet[2295]: I0213 19:51:37.642576 2295 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:37.646582 kubelet[2295]: I0213 19:51:37.643525 2295 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:37.646582 kubelet[2295]: W0213 19:51:37.643771 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.646582 kubelet[2295]: E0213 19:51:37.643819 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.646582 kubelet[2295]: E0213 19:51:37.644314 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Feb 13 19:51:37.648931 kubelet[2295]: I0213 19:51:37.648904 2295 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:37.648931 kubelet[2295]: I0213 19:51:37.648923 2295 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:37.649013 kubelet[2295]: I0213 19:51:37.648990 2295 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: 
no such file or directory Feb 13 19:51:37.656113 kubelet[2295]: I0213 19:51:37.656071 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:37.657070 kubelet[2295]: I0213 19:51:37.657023 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:37.657200 kubelet[2295]: I0213 19:51:37.657175 2295 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:37.657241 kubelet[2295]: I0213 19:51:37.657216 2295 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:37.657286 kubelet[2295]: E0213 19:51:37.657262 2295 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:37.661219 kubelet[2295]: W0213 19:51:37.658896 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.661219 kubelet[2295]: E0213 19:51:37.658945 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:37.661219 kubelet[2295]: E0213 19:51:37.660327 2295 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:37.667055 kubelet[2295]: I0213 19:51:37.667033 2295 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:37.667055 kubelet[2295]: I0213 19:51:37.667052 2295 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:37.667055 kubelet[2295]: I0213 19:51:37.667069 2295 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:37.729280 kubelet[2295]: I0213 19:51:37.729248 2295 policy_none.go:49] "None policy: Start" Feb 13 19:51:37.730044 kubelet[2295]: I0213 19:51:37.730015 2295 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:37.730044 kubelet[2295]: I0213 19:51:37.730049 2295 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:37.734034 kubelet[2295]: I0213 19:51:37.733992 2295 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:37.734608 kubelet[2295]: I0213 19:51:37.734170 2295 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:37.734608 kubelet[2295]: I0213 19:51:37.734292 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:37.735707 kubelet[2295]: E0213 19:51:37.735684 2295 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:51:37.743858 kubelet[2295]: I0213 19:51:37.743840 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:51:37.746094 kubelet[2295]: E0213 19:51:37.746070 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:51:37.758343 kubelet[2295]: I0213 19:51:37.758318 2295 topology_manager.go:215] "Topology Admit 
Handler" podUID="b265e1c3b1a1ef565fd1115d8e883f92" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:51:37.759222 kubelet[2295]: I0213 19:51:37.759179 2295 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:51:37.760054 kubelet[2295]: I0213 19:51:37.760026 2295 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:51:37.844084 kubelet[2295]: I0213 19:51:37.843990 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:51:37.844084 kubelet[2295]: I0213 19:51:37.844021 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:37.844084 kubelet[2295]: I0213 19:51:37.844042 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:37.844084 kubelet[2295]: I0213 19:51:37.844059 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:37.844084 kubelet[2295]: I0213 19:51:37.844075 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:37.844269 kubelet[2295]: I0213 19:51:37.844092 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:37.844269 kubelet[2295]: I0213 19:51:37.844119 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:37.844269 kubelet[2295]: I0213 19:51:37.844137 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:37.844269 kubelet[2295]: I0213 19:51:37.844154 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:37.845300 kubelet[2295]: E0213 19:51:37.845260 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Feb 13 19:51:37.947525 kubelet[2295]: I0213 19:51:37.947284 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:51:37.947587 kubelet[2295]: E0213 19:51:37.947529 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:51:38.063704 kubelet[2295]: E0213 19:51:38.063653 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.064083 kubelet[2295]: E0213 19:51:38.063988 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.064907 kubelet[2295]: E0213 19:51:38.064425 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.065019 containerd[1534]: time="2025-02-13T19:51:38.064448111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b265e1c3b1a1ef565fd1115d8e883f92,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:38.065019 containerd[1534]: time="2025-02-13T19:51:38.064879773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:38.065436 containerd[1534]: time="2025-02-13T19:51:38.065130576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:38.245791 kubelet[2295]: E0213 19:51:38.245701 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Feb 13 19:51:38.348934 kubelet[2295]: I0213 19:51:38.348908 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:51:38.349230 kubelet[2295]: E0213 19:51:38.349171 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:51:38.497001 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1449331831.mount: Deactivated successfully. Feb 13 19:51:38.501289 containerd[1534]: time="2025-02-13T19:51:38.501213076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:38.502518 containerd[1534]: time="2025-02-13T19:51:38.502489777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:38.503118 containerd[1534]: time="2025-02-13T19:51:38.503072049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:38.504379 containerd[1534]: time="2025-02-13T19:51:38.504353271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:38.505024 containerd[1534]: time="2025-02-13T19:51:38.504999804Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:38.506174 containerd[1534]: time="2025-02-13T19:51:38.506113490Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:38.506777 containerd[1534]: time="2025-02-13T19:51:38.506743258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:51:38.509238 containerd[1534]: time="2025-02-13T19:51:38.508981675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:38.511448 containerd[1534]: time="2025-02-13T19:51:38.511216731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 446.240286ms" Feb 13 19:51:38.513859 containerd[1534]: time="2025-02-13T19:51:38.513827391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.278087ms" Feb 13 19:51:38.516535 containerd[1534]: time="2025-02-13T19:51:38.516502392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 451.277585ms" Feb 13 19:51:38.635808 containerd[1534]: time="2025-02-13T19:51:38.635707811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.635808 containerd[1534]: time="2025-02-13T19:51:38.635760549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.635808 containerd[1534]: time="2025-02-13T19:51:38.635775954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.635994 containerd[1534]: time="2025-02-13T19:51:38.635867184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.636421 containerd[1534]: time="2025-02-13T19:51:38.636267396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.636421 containerd[1534]: time="2025-02-13T19:51:38.636331977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.636421 containerd[1534]: time="2025-02-13T19:51:38.636346582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.636421 containerd[1534]: time="2025-02-13T19:51:38.635514868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.636421 containerd[1534]: time="2025-02-13T19:51:38.636324895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.636587 containerd[1534]: time="2025-02-13T19:51:38.636428329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.636645 containerd[1534]: time="2025-02-13T19:51:38.636593263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.637130 containerd[1534]: time="2025-02-13T19:51:38.637084985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.639215 kubelet[2295]: W0213 19:51:38.639074 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.639215 kubelet[2295]: E0213 19:51:38.639152 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.659016 kubelet[2295]: W0213 19:51:38.658958 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.659016 kubelet[2295]: E0213 19:51:38.659020 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.687487 containerd[1534]: time="2025-02-13T19:51:38.687448772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"838f53333c76a53343c435b3c88d3479508e6b19d2a0147c26270c95ae8b2d77\"" Feb 13 19:51:38.688780 kubelet[2295]: E0213 19:51:38.688740 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.689375 containerd[1534]: time="2025-02-13T19:51:38.689337274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ddc7118cd2c28fa917b16db5110c15f225b2a1033d9c6d34a733e5890bd35a\"" Feb 13 19:51:38.690527 kubelet[2295]: E0213 19:51:38.690292 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.692773 containerd[1534]: time="2025-02-13T19:51:38.692726110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b265e1c3b1a1ef565fd1115d8e883f92,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6f564f790fe1b5718072574a0e8b62f5e7fdddf1056df6b6814890683dcdf90\"" Feb 13 19:51:38.693616 kubelet[2295]: E0213 19:51:38.693593 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.693669 containerd[1534]: time="2025-02-13T19:51:38.693592395Z" level=info msg="CreateContainer within sandbox \"a8ddc7118cd2c28fa917b16db5110c15f225b2a1033d9c6d34a733e5890bd35a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:51:38.694367 containerd[1534]: time="2025-02-13T19:51:38.693611001Z" level=info msg="CreateContainer within sandbox \"838f53333c76a53343c435b3c88d3479508e6b19d2a0147c26270c95ae8b2d77\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:51:38.695808 containerd[1534]: time="2025-02-13T19:51:38.695780276Z" level=info msg="CreateContainer within sandbox \"b6f564f790fe1b5718072574a0e8b62f5e7fdddf1056df6b6814890683dcdf90\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:51:38.710628 containerd[1534]: time="2025-02-13T19:51:38.710586952Z" level=info msg="CreateContainer within sandbox \"838f53333c76a53343c435b3c88d3479508e6b19d2a0147c26270c95ae8b2d77\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be43509523525b1e410070dcaa74531a7f0f951e79ccd5ad7c0989896f526e99\"" Feb 13 19:51:38.711236 containerd[1534]: time="2025-02-13T19:51:38.711209797Z" level=info msg="StartContainer for \"be43509523525b1e410070dcaa74531a7f0f951e79ccd5ad7c0989896f526e99\"" Feb 13 19:51:38.712666 containerd[1534]: time="2025-02-13T19:51:38.712562363Z" level=info msg="CreateContainer within sandbox \"a8ddc7118cd2c28fa917b16db5110c15f225b2a1033d9c6d34a733e5890bd35a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c27fda7b5af3c747ef7c8c29ef33dbad774255c9d4d78f1960de96b2a03b6f2b\"" Feb 13 19:51:38.712973 containerd[1534]: time="2025-02-13T19:51:38.712949370Z" level=info msg="StartContainer for \"c27fda7b5af3c747ef7c8c29ef33dbad774255c9d4d78f1960de96b2a03b6f2b\"" Feb 13 19:51:38.714562 containerd[1534]: time="2025-02-13T19:51:38.714515286Z" level=info msg="CreateContainer within sandbox \"b6f564f790fe1b5718072574a0e8b62f5e7fdddf1056df6b6814890683dcdf90\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e33699dc2bbd184bb14353324826e4dd8307bff62cd2cbf97e440e20e546e1e5\"" Feb 13 19:51:38.714953 containerd[1534]: time="2025-02-13T19:51:38.714926261Z" level=info msg="StartContainer for \"e33699dc2bbd184bb14353324826e4dd8307bff62cd2cbf97e440e20e546e1e5\"" Feb 13 19:51:38.717718 kubelet[2295]: W0213 19:51:38.717634 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.717718 kubelet[2295]: E0213 19:51:38.717698 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:51:38.764580 containerd[1534]: time="2025-02-13T19:51:38.764467417Z" level=info msg="StartContainer for \"be43509523525b1e410070dcaa74531a7f0f951e79ccd5ad7c0989896f526e99\" returns successfully" Feb 13 19:51:38.764580 containerd[1534]: time="2025-02-13T19:51:38.764571412Z" level=info msg="StartContainer for \"e33699dc2bbd184bb14353324826e4dd8307bff62cd2cbf97e440e20e546e1e5\" returns successfully" Feb 13 19:51:38.790463 containerd[1534]: time="2025-02-13T19:51:38.786226263Z" level=info msg="StartContainer for \"c27fda7b5af3c747ef7c8c29ef33dbad774255c9d4d78f1960de96b2a03b6f2b\" returns successfully" Feb 13 19:51:39.154174 kubelet[2295]: I0213 19:51:39.154138 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:51:39.671491 kubelet[2295]: E0213 19:51:39.671463 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:51:39.673959 kubelet[2295]: E0213 19:51:39.673881 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:39.676654 kubelet[2295]: E0213 19:51:39.676621 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:40.675391 kubelet[2295]: E0213 19:51:40.675365 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:41.188834 kubelet[2295]: E0213 19:51:41.188776 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:51:41.339954 kubelet[2295]: I0213 19:51:41.339910 2295 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:51:41.635290 kubelet[2295]: I0213 19:51:41.635261 2295 apiserver.go:52] "Watching apiserver" Feb 13 19:51:41.643586 kubelet[2295]: I0213 19:51:41.643529 2295 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:43.392313 systemd[1]: Reloading requested from client PID 2578 ('systemctl') (unit session-5.scope)... Feb 13 19:51:43.392328 systemd[1]: Reloading... Feb 13 19:51:43.454261 zram_generator::config[2620]: No configuration found. Feb 13 19:51:43.541770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:43.601526 systemd[1]: Reloading finished in 208 ms. Feb 13 19:51:43.625681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:43.625807 kubelet[2295]: E0213 19:51:43.625552 2295 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1823dc79ffbcb3f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:51:37.634853881 +0000 UTC m=+1.189922672,LastTimestamp:2025-02-13 19:51:37.634853881 +0000 UTC m=+1.189922672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:51:43.626528 kubelet[2295]: I0213 19:51:43.626496 2295 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:43.636141 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:51:43.636476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:43.643526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:43.737170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
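The repeated dns.go:153 "Nameserver limits exceeded" entries above are kubelet noticing more than three nameserver lines in the resolv.conf it was handed and applying only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8), since the glibc resolver honours at most three servers. The Go snippet below is only an illustrative sketch of that kind of check, not kubelet's actual implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit of three servers;
// anything beyond that is dropped, which is what the repeated
// "Nameserver limits exceeded" log entries above are reporting.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying only: %v\n", nameservers[:maxNameservers])
	} else {
		fmt.Printf("nameservers: %v\n", nameservers)
	}
}
```

Trimming the node's resolv.conf (or the file pointed to by kubelet's --resolv-conf option) down to three nameservers is what makes this warning stop.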
Feb 13 19:51:43.740731 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:43.791580 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:43.791580 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:43.791580 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:43.792553 kubelet[2669]: I0213 19:51:43.791591 2669 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:43.796231 kubelet[2669]: I0213 19:51:43.795773 2669 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:43.796231 kubelet[2669]: I0213 19:51:43.795801 2669 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:43.796231 kubelet[2669]: I0213 19:51:43.795973 2669 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:43.797382 kubelet[2669]: I0213 19:51:43.797358 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:51:43.798962 kubelet[2669]: I0213 19:51:43.798829 2669 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:43.805718 kubelet[2669]: I0213 19:51:43.805687 2669 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:43.806366 kubelet[2669]: I0213 19:51:43.806325 2669 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:43.806523 kubelet[2669]: I0213 19:51:43.806364 2669 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:43.806610 kubelet[2669]: I0213 19:51:43.806530 2669 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:51:43.806610 kubelet[2669]: I0213 19:51:43.806540 2669 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:43.806610 kubelet[2669]: I0213 19:51:43.806572 2669 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:43.806677 kubelet[2669]: I0213 19:51:43.806668 2669 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:43.806697 kubelet[2669]: I0213 19:51:43.806680 2669 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:43.806733 kubelet[2669]: I0213 19:51:43.806717 2669 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:43.807065 kubelet[2669]: I0213 19:51:43.806739 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.807451 2669 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.807718 2669 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.808122 2669 server.go:1264] "Started kubelet" Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.808978 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.809242 2669 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
19:51:43.809641 kubelet[2669]: I0213 19:51:43.809279 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:43.809641 kubelet[2669]: I0213 19:51:43.809531 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:43.814205 kubelet[2669]: I0213 19:51:43.810321 2669 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:43.814205 kubelet[2669]: I0213 19:51:43.812130 2669 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:43.814205 kubelet[2669]: I0213 19:51:43.812254 2669 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:43.814205 kubelet[2669]: I0213 19:51:43.812404 2669 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:43.815435 kubelet[2669]: E0213 19:51:43.815410 2669 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:43.827270 kubelet[2669]: I0213 19:51:43.826423 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:43.827270 kubelet[2669]: I0213 19:51:43.827274 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:43.827417 kubelet[2669]: I0213 19:51:43.827305 2669 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:43.827417 kubelet[2669]: I0213 19:51:43.827328 2669 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:43.827417 kubelet[2669]: E0213 19:51:43.827366 2669 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:43.833088 kubelet[2669]: I0213 19:51:43.833037 2669 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:43.833158 kubelet[2669]: I0213 19:51:43.833127 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:43.835690 kubelet[2669]: I0213 19:51:43.835670 2669 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:43.874410 kubelet[2669]: I0213 19:51:43.874378 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:43.874410 kubelet[2669]: I0213 19:51:43.874399 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:43.874410 kubelet[2669]: I0213 19:51:43.874418 2669 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:43.874574 kubelet[2669]: I0213 19:51:43.874555 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:51:43.874600 kubelet[2669]: I0213 19:51:43.874572 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:51:43.874600 kubelet[2669]: I0213 19:51:43.874592 2669 policy_none.go:49] "None policy: Start" Feb 13 19:51:43.875270 kubelet[2669]: I0213 19:51:43.875172 2669 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:43.875343 kubelet[2669]: I0213 19:51:43.875283 2669 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:43.875457 kubelet[2669]: I0213 19:51:43.875437 2669 state_mem.go:75] "Updated machine memory state" Feb 13 19:51:43.876539 kubelet[2669]: I0213 19:51:43.876503 2669 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:43.877424 
kubelet[2669]: I0213 19:51:43.877374 2669 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:43.877505 kubelet[2669]: I0213 19:51:43.877472 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:43.916635 kubelet[2669]: I0213 19:51:43.916043 2669 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:51:43.922905 kubelet[2669]: I0213 19:51:43.922860 2669 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:51:43.922988 kubelet[2669]: I0213 19:51:43.922934 2669 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:51:43.927532 kubelet[2669]: I0213 19:51:43.927442 2669 topology_manager.go:215] "Topology Admit Handler" podUID="b265e1c3b1a1ef565fd1115d8e883f92" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:51:43.927605 kubelet[2669]: I0213 19:51:43.927547 2669 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:51:43.927605 kubelet[2669]: I0213 19:51:43.927580 2669 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:51:44.114021 kubelet[2669]: I0213 19:51:44.113976 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:44.114021 kubelet[2669]: I0213 19:51:44.114018 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:44.114146 kubelet[2669]: I0213 19:51:44.114038 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:51:44.114146 kubelet[2669]: I0213 19:51:44.114055 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:44.114146 kubelet[2669]: I0213 19:51:44.114074 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:44.114146 kubelet[2669]: I0213 19:51:44.114089 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:44.114146 kubelet[2669]: I0213 19:51:44.114104 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b265e1c3b1a1ef565fd1115d8e883f92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b265e1c3b1a1ef565fd1115d8e883f92\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:51:44.114308 kubelet[2669]: I0213 19:51:44.114119 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:44.114308 kubelet[2669]: I0213 19:51:44.114135 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:51:44.246950 kubelet[2669]: E0213 19:51:44.246746 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.246950 kubelet[2669]: E0213 19:51:44.246791 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.247354 kubelet[2669]: E0213 19:51:44.247288 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.807567 kubelet[2669]: I0213 19:51:44.807230 2669 apiserver.go:52] "Watching apiserver" Feb 13 19:51:44.812978 kubelet[2669]: I0213 19:51:44.812956 2669 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:44.850153 kubelet[2669]: E0213 19:51:44.849977 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.850153 kubelet[2669]: E0213 19:51:44.850098 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.850283 kubelet[2669]: E0213 19:51:44.850160 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:44.866890 kubelet[2669]: I0213 19:51:44.866841 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8668277039999999 podStartE2EDuration="1.866827704s" podCreationTimestamp="2025-02-13 19:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:44.866622588 
+0000 UTC m=+1.122395184" watchObservedRunningTime="2025-02-13 19:51:44.866827704 +0000 UTC m=+1.122600300" Feb 13 19:51:44.873064 kubelet[2669]: I0213 19:51:44.873016 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.872999109 podStartE2EDuration="1.872999109s" podCreationTimestamp="2025-02-13 19:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:44.872813036 +0000 UTC m=+1.128585632" watchObservedRunningTime="2025-02-13 19:51:44.872999109 +0000 UTC m=+1.128771705" Feb 13 19:51:44.879769 kubelet[2669]: I0213 19:51:44.879640 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8796279550000001 podStartE2EDuration="1.879627955s" podCreationTimestamp="2025-02-13 19:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:44.87954578 +0000 UTC m=+1.135318376" watchObservedRunningTime="2025-02-13 19:51:44.879627955 +0000 UTC m=+1.135400551" Feb 13 19:51:45.157522 sudo[1699]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:45.159047 sshd[1692]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:45.162448 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:60274.service: Deactivated successfully. Feb 13 19:51:45.164409 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:51:45.166142 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:51:45.167057 systemd-logind[1515]: Removed session 5. Feb 13 19:51:45.851425 kubelet[2669]: E0213 19:51:45.851391 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:51.145926 kubelet[2669]: E0213 19:51:51.145840 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:51.859291 kubelet[2669]: E0213 19:51:51.859155 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:53.139004 update_engine[1518]: I20250213 19:51:53.138934 1518 update_attempter.cc:509] Updating boot flags... 
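The podStartSLOduration values reported by pod_startup_latency_tracker above are simply the gap between podCreationTimestamp and observedRunningTime. A minimal Go check of that arithmetic, with the two kube-controller-manager timestamps copied verbatim from the entries above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet timestamps quoted in the log above;
	// Go accepts the fractional seconds in the input even though the
	// layout string does not spell them out.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-02-13 19:51:43 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-02-13 19:51:44.866827704 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1.866827704s, matching the reported podStartSLOduration.
	fmt.Println(running.Sub(created))
}
```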
Feb 13 19:51:53.168206 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2740) Feb 13 19:51:53.196973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2742) Feb 13 19:51:53.464223 kubelet[2669]: E0213 19:51:53.464042 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:53.603214 kubelet[2669]: E0213 19:51:53.603112 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:53.861078 kubelet[2669]: E0213 19:51:53.861043 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:53.861618 kubelet[2669]: E0213 19:51:53.861595 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:54.862275 kubelet[2669]: E0213 19:51:54.862211 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:57.398365 kubelet[2669]: I0213 19:51:57.398315 2669 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:51:57.399969 containerd[1534]: time="2025-02-13T19:51:57.399916552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:51:57.400510 kubelet[2669]: I0213 19:51:57.400488 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:51:58.367524 kubelet[2669]: I0213 19:51:58.363573 2669 topology_manager.go:215] "Topology Admit Handler" podUID="2be29815-8cdc-4c8a-b8ef-d1286bf624f7" podNamespace="kube-system" podName="kube-proxy-9rm8n" Feb 13 19:51:58.372703 kubelet[2669]: I0213 19:51:58.372631 2669 topology_manager.go:215] "Topology Admit Handler" podUID="dd2c35dd-7362-466f-acaf-7fd1f4f832dc" podNamespace="kube-flannel" podName="kube-flannel-ds-l2r7d" Feb 13 19:51:58.408591 kubelet[2669]: I0213 19:51:58.408510 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2be29815-8cdc-4c8a-b8ef-d1286bf624f7-xtables-lock\") pod \"kube-proxy-9rm8n\" (UID: \"2be29815-8cdc-4c8a-b8ef-d1286bf624f7\") " pod="kube-system/kube-proxy-9rm8n" Feb 13 19:51:58.408591 kubelet[2669]: I0213 19:51:58.408571 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-xtables-lock\") pod \"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.408591 kubelet[2669]: I0213 19:51:58.408596 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2be29815-8cdc-4c8a-b8ef-d1286bf624f7-kube-proxy\") pod \"kube-proxy-9rm8n\" (UID: \"2be29815-8cdc-4c8a-b8ef-d1286bf624f7\") " pod="kube-system/kube-proxy-9rm8n" Feb 13 19:51:58.409035 kubelet[2669]: I0213 19:51:58.408611 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-cni\") pod \"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.409035 kubelet[2669]: I0213 19:51:58.408627 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2be29815-8cdc-4c8a-b8ef-d1286bf624f7-lib-modules\") pod \"kube-proxy-9rm8n\" (UID: \"2be29815-8cdc-4c8a-b8ef-d1286bf624f7\") " pod="kube-system/kube-proxy-9rm8n" Feb 13 19:51:58.409035 kubelet[2669]: I0213 19:51:58.408643 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8lx2\" (UniqueName: \"kubernetes.io/projected/2be29815-8cdc-4c8a-b8ef-d1286bf624f7-kube-api-access-w8lx2\") pod \"kube-proxy-9rm8n\" (UID: \"2be29815-8cdc-4c8a-b8ef-d1286bf624f7\") " pod="kube-system/kube-proxy-9rm8n" Feb 13 19:51:58.409035 kubelet[2669]: I0213 19:51:58.408659 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb84h\" (UniqueName: \"kubernetes.io/projected/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-kube-api-access-gb84h\") pod \"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.409301 kubelet[2669]: I0213 19:51:58.408708 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-run\") pod 
\"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.409301 kubelet[2669]: I0213 19:51:58.409224 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-cni-plugin\") pod \"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.409301 kubelet[2669]: I0213 19:51:58.409247 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dd2c35dd-7362-466f-acaf-7fd1f4f832dc-flannel-cfg\") pod \"kube-flannel-ds-l2r7d\" (UID: \"dd2c35dd-7362-466f-acaf-7fd1f4f832dc\") " pod="kube-flannel/kube-flannel-ds-l2r7d" Feb 13 19:51:58.667611 kubelet[2669]: E0213 19:51:58.667174 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:58.668406 containerd[1534]: time="2025-02-13T19:51:58.667878134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rm8n,Uid:2be29815-8cdc-4c8a-b8ef-d1286bf624f7,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:58.677266 kubelet[2669]: E0213 19:51:58.677014 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:58.678953 containerd[1534]: time="2025-02-13T19:51:58.678653578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l2r7d,Uid:dd2c35dd-7362-466f-acaf-7fd1f4f832dc,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:51:58.689247 containerd[1534]: time="2025-02-13T19:51:58.688893057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:58.689247 containerd[1534]: time="2025-02-13T19:51:58.689037670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:58.689247 containerd[1534]: time="2025-02-13T19:51:58.689116436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:58.689522 containerd[1534]: time="2025-02-13T19:51:58.689296652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:58.695907 containerd[1534]: time="2025-02-13T19:51:58.695696641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:58.695907 containerd[1534]: time="2025-02-13T19:51:58.695771647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:58.696211 containerd[1534]: time="2025-02-13T19:51:58.696091115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:58.696274 containerd[1534]: time="2025-02-13T19:51:58.696231007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:58.728524 containerd[1534]: time="2025-02-13T19:51:58.727910085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rm8n,Uid:2be29815-8cdc-4c8a-b8ef-d1286bf624f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"be9a4882a34db204cbddf975c12d846e7a456c2070505ee22337fe9fbfe0817c\"" Feb 13 19:51:58.731257 kubelet[2669]: E0213 19:51:58.730985 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:58.738985 containerd[1534]: time="2025-02-13T19:51:58.738931671Z" level=info msg="CreateContainer within sandbox \"be9a4882a34db204cbddf975c12d846e7a456c2070505ee22337fe9fbfe0817c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:51:58.742093 containerd[1534]: time="2025-02-13T19:51:58.742050499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l2r7d,Uid:dd2c35dd-7362-466f-acaf-7fd1f4f832dc,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\"" Feb 13 19:51:58.742734 kubelet[2669]: E0213 19:51:58.742699 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:58.753471 containerd[1534]: time="2025-02-13T19:51:58.753428875Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:51:58.757526 containerd[1534]: time="2025-02-13T19:51:58.757479503Z" level=info msg="CreateContainer within sandbox \"be9a4882a34db204cbddf975c12d846e7a456c2070505ee22337fe9fbfe0817c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fbe1a5cdf8820838ad3512e2e24aa76283c651cf3e52bebb955e1d86f5747629\"" Feb 13 19:51:58.758091 containerd[1534]: time="2025-02-13T19:51:58.757992547Z" level=info msg="StartContainer for \"fbe1a5cdf8820838ad3512e2e24aa76283c651cf3e52bebb955e1d86f5747629\"" Feb 13 19:51:58.802203 containerd[1534]: time="2025-02-13T19:51:58.802150136Z" level=info msg="StartContainer for \"fbe1a5cdf8820838ad3512e2e24aa76283c651cf3e52bebb955e1d86f5747629\" returns successfully" Feb 13 19:51:58.872604 kubelet[2669]: E0213 19:51:58.872567 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:00.085088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867437861.mount: Deactivated successfully. 
Feb 13 19:52:00.111115 containerd[1534]: time="2025-02-13T19:52:00.111065814Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:00.111567 containerd[1534]: time="2025-02-13T19:52:00.111524970Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:52:00.112384 containerd[1534]: time="2025-02-13T19:52:00.112345114Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:00.114525 containerd[1534]: time="2025-02-13T19:52:00.114489882Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:00.115339 containerd[1534]: time="2025-02-13T19:52:00.115305226Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.361830748s" Feb 13 19:52:00.115389 containerd[1534]: time="2025-02-13T19:52:00.115338989Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:52:00.117666 containerd[1534]: time="2025-02-13T19:52:00.117639129Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:52:00.126569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166944782.mount: Deactivated successfully. 
Feb 13 19:52:00.127311 containerd[1534]: time="2025-02-13T19:52:00.126786646Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1d80beeea6722b00a926e22f14474ce58557a24e5b28d8e52dd347b2d3ac18c9\"" Feb 13 19:52:00.127377 containerd[1534]: time="2025-02-13T19:52:00.127336729Z" level=info msg="StartContainer for \"1d80beeea6722b00a926e22f14474ce58557a24e5b28d8e52dd347b2d3ac18c9\"" Feb 13 19:52:00.168232 containerd[1534]: time="2025-02-13T19:52:00.167547041Z" level=info msg="StartContainer for \"1d80beeea6722b00a926e22f14474ce58557a24e5b28d8e52dd347b2d3ac18c9\" returns successfully" Feb 13 19:52:00.203947 containerd[1534]: time="2025-02-13T19:52:00.203850007Z" level=info msg="shim disconnected" id=1d80beeea6722b00a926e22f14474ce58557a24e5b28d8e52dd347b2d3ac18c9 namespace=k8s.io Feb 13 19:52:00.203947 containerd[1534]: time="2025-02-13T19:52:00.203929373Z" level=warning msg="cleaning up after shim disconnected" id=1d80beeea6722b00a926e22f14474ce58557a24e5b28d8e52dd347b2d3ac18c9 namespace=k8s.io Feb 13 19:52:00.203947 containerd[1534]: time="2025-02-13T19:52:00.203940614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:52:00.875628 kubelet[2669]: E0213 19:52:00.875597 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:00.877050 containerd[1534]: time="2025-02-13T19:52:00.876955209Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:52:00.888379 kubelet[2669]: I0213 19:52:00.888323 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rm8n" podStartSLOduration=2.888306538 podStartE2EDuration="2.888306538s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:58.88552545 +0000 UTC m=+15.141298006" watchObservedRunningTime="2025-02-13 19:52:00.888306538 +0000 UTC m=+17.144079134" Feb 13 19:52:02.177877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167075401.mount: Deactivated successfully. 
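The mount unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount3167075401.mount in the entries above are systemd's escaped form of the backing path under /var/lib/containerd/tmpmounts/: slashes become dashes, and other bytes that are unsafe in unit names, including a literal dash, become \xXX escapes. The function below is a rough re-implementation of that documented rule (see systemd.unit(5)) for illustration only, not code taken from systemd:

```go
package main

import "fmt"

// escapePath sketches systemd's path escaping: strip leading and
// trailing slashes, turn the remaining '/' separators into '-', and
// hex-escape anything that is not alphanumeric, '_' or '.', so a
// literal '-' in the path becomes "\x2d".
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	for len(p) > 0 && p[len(p)-1] == '/' {
		p = p[:len(p)-1]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c == '_' || c == '.' ||
			(c >= '0' && c <= '9') ||
			(c >= 'a' && c <= 'z') ||
			(c >= 'A' && c <= 'Z'):
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf("\\x%02x", c))...)
		}
	}
	return string(out)
}

func main() {
	// Prints var-lib-containerd-tmpmounts-containerd\x2dmount3167075401.mount
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount3167075401") + ".mount")
}
```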
Feb 13 19:52:02.823162 containerd[1534]: time="2025-02-13T19:52:02.823107471Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:02.823618 containerd[1534]: time="2025-02-13T19:52:02.823567824Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:52:02.824487 containerd[1534]: time="2025-02-13T19:52:02.824456088Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:02.827623 containerd[1534]: time="2025-02-13T19:52:02.827570712Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:02.829673 containerd[1534]: time="2025-02-13T19:52:02.829645301Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.95265233s" Feb 13 19:52:02.829733 containerd[1534]: time="2025-02-13T19:52:02.829675543Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:52:02.835158 containerd[1534]: time="2025-02-13T19:52:02.835116254Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:52:02.843270 containerd[1534]: time="2025-02-13T19:52:02.843241638Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd\"" Feb 13 19:52:02.843621 containerd[1534]: time="2025-02-13T19:52:02.843597344Z" level=info msg="StartContainer for \"c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd\"" Feb 13 19:52:02.890503 containerd[1534]: time="2025-02-13T19:52:02.890466752Z" level=info msg="StartContainer for \"c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd\" returns successfully" Feb 13 19:52:02.926251 kubelet[2669]: I0213 19:52:02.924248 2669 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:52:02.950771 kubelet[2669]: I0213 19:52:02.950729 2669 topology_manager.go:215] "Topology Admit Handler" podUID="7f734f8e-90e1-4965-bdff-46d04c1203c1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jxqhs" Feb 13 19:52:02.959372 kubelet[2669]: I0213 19:52:02.951328 2669 topology_manager.go:215] "Topology Admit Handler" podUID="fcdf435b-222b-4463-b11d-e10a00adc872" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vxtk4" Feb 13 19:52:03.013888 containerd[1534]: time="2025-02-13T19:52:03.013826659Z" level=info msg="shim disconnected" id=c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd namespace=k8s.io Feb 13 19:52:03.013888 containerd[1534]: time="2025-02-13T19:52:03.013881823Z" level=warning msg="cleaning up after shim disconnected" 
id=c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd namespace=k8s.io Feb 13 19:52:03.013888 containerd[1534]: time="2025-02-13T19:52:03.013893104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:52:03.024257 containerd[1534]: time="2025-02-13T19:52:03.024219575Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:52:03.044438 kubelet[2669]: I0213 19:52:03.044363 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcdf435b-222b-4463-b11d-e10a00adc872-config-volume\") pod \"coredns-7db6d8ff4d-vxtk4\" (UID: \"fcdf435b-222b-4463-b11d-e10a00adc872\") " pod="kube-system/coredns-7db6d8ff4d-vxtk4" Feb 13 19:52:03.044438 kubelet[2669]: I0213 19:52:03.044401 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f734f8e-90e1-4965-bdff-46d04c1203c1-config-volume\") pod \"coredns-7db6d8ff4d-jxqhs\" (UID: \"7f734f8e-90e1-4965-bdff-46d04c1203c1\") " pod="kube-system/coredns-7db6d8ff4d-jxqhs" Feb 13 19:52:03.044567 kubelet[2669]: I0213 19:52:03.044469 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsndt\" (UniqueName: \"kubernetes.io/projected/fcdf435b-222b-4463-b11d-e10a00adc872-kube-api-access-fsndt\") pod \"coredns-7db6d8ff4d-vxtk4\" (UID: \"fcdf435b-222b-4463-b11d-e10a00adc872\") " pod="kube-system/coredns-7db6d8ff4d-vxtk4" Feb 13 19:52:03.044567 kubelet[2669]: I0213 19:52:03.044500 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whgk\" (UniqueName: \"kubernetes.io/projected/7f734f8e-90e1-4965-bdff-46d04c1203c1-kube-api-access-8whgk\") pod \"coredns-7db6d8ff4d-jxqhs\" (UID: \"7f734f8e-90e1-4965-bdff-46d04c1203c1\") " pod="kube-system/coredns-7db6d8ff4d-jxqhs" Feb 13 19:52:03.123015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1ccec8aafcd1c1425be097da44de52eac0291891329e493d2625baf0fa26dbd-rootfs.mount: Deactivated successfully. 
Feb 13 19:52:03.260767 kubelet[2669]: E0213 19:52:03.260731 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:03.260864 kubelet[2669]: E0213 19:52:03.260798 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:03.261609 containerd[1534]: time="2025-02-13T19:52:03.261302190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vxtk4,Uid:fcdf435b-222b-4463-b11d-e10a00adc872,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:03.261609 containerd[1534]: time="2025-02-13T19:52:03.261354994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxqhs,Uid:7f734f8e-90e1-4965-bdff-46d04c1203c1,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:03.337068 containerd[1534]: time="2025-02-13T19:52:03.337000085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxqhs,Uid:7f734f8e-90e1-4965-bdff-46d04c1203c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:52:03.337321 kubelet[2669]: E0213 19:52:03.337238 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:52:03.337321 kubelet[2669]: E0213 19:52:03.337310 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-jxqhs" Feb 13 19:52:03.337467 kubelet[2669]: E0213 19:52:03.337330 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-jxqhs" Feb 13 19:52:03.337467 kubelet[2669]: E0213 19:52:03.337371 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jxqhs_kube-system(7f734f8e-90e1-4965-bdff-46d04c1203c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jxqhs_kube-system(7f734f8e-90e1-4965-bdff-46d04c1203c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-jxqhs" podUID="7f734f8e-90e1-4965-bdff-46d04c1203c1" Feb 13 19:52:03.338350 containerd[1534]: time="2025-02-13T19:52:03.338288014Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vxtk4,Uid:fcdf435b-222b-4463-b11d-e10a00adc872,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:52:03.338518 kubelet[2669]: E0213 19:52:03.338493 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:52:03.338563 kubelet[2669]: E0213 19:52:03.338537 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vxtk4" Feb 13 19:52:03.338563 kubelet[2669]: E0213 19:52:03.338554 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vxtk4" Feb 13 19:52:03.338608 kubelet[2669]: E0213 19:52:03.338587 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vxtk4_kube-system(fcdf435b-222b-4463-b11d-e10a00adc872)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vxtk4_kube-system(fcdf435b-222b-4463-b11d-e10a00adc872)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-vxtk4" podUID="fcdf435b-222b-4463-b11d-e10a00adc872" Feb 13 19:52:03.895156 kubelet[2669]: E0213 19:52:03.895117 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:03.898552 containerd[1534]: time="2025-02-13T19:52:03.898514654Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:52:03.910413 containerd[1534]: time="2025-02-13T19:52:03.910379431Z" level=info msg="CreateContainer within sandbox \"966dc9f57b535275bc62b2a2ec6dfd1819b116f57b0f13a8f0d823725db6f4b1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"217d81b184fb7624f791df07830e332d97392c45727a4212d3a73c02bbe17056\"" Feb 13 19:52:03.911441 containerd[1534]: time="2025-02-13T19:52:03.910867145Z" level=info msg="StartContainer for \"217d81b184fb7624f791df07830e332d97392c45727a4212d3a73c02bbe17056\"" Feb 13 19:52:03.969332 containerd[1534]: time="2025-02-13T19:52:03.969287450Z" level=info msg="StartContainer for 
\"217d81b184fb7624f791df07830e332d97392c45727a4212d3a73c02bbe17056\" returns successfully" Feb 13 19:52:04.124004 systemd[1]: run-netns-cni\x2d6018f731\x2d0dc4\x2d16d7\x2db498\x2d17154927afaa.mount: Deactivated successfully. Feb 13 19:52:04.124144 systemd[1]: run-netns-cni\x2dbb92e7cc\x2d7cd6\x2df0dc\x2d4e6e\x2d779de795d758.mount: Deactivated successfully. Feb 13 19:52:04.124239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34ae009f8c4445e24a8337c807e23f6a6bcc0a86cbaf6b834d5e76569c899ad2-shm.mount: Deactivated successfully. Feb 13 19:52:04.124356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88706d2d38b2bead9a49299fdf724199fb6c0495032f34cd6b461d9e17093fb3-shm.mount: Deactivated successfully. Feb 13 19:52:04.899296 kubelet[2669]: E0213 19:52:04.899246 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:04.908697 kubelet[2669]: I0213 19:52:04.908625 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-l2r7d" podStartSLOduration=2.820494306 podStartE2EDuration="6.908608328s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="2025-02-13 19:51:58.743329968 +0000 UTC m=+14.999102564" lastFinishedPulling="2025-02-13 19:52:02.83144399 +0000 UTC m=+19.087216586" observedRunningTime="2025-02-13 19:52:04.907991407 +0000 UTC m=+21.163764003" watchObservedRunningTime="2025-02-13 19:52:04.908608328 +0000 UTC m=+21.164380924" Feb 13 19:52:05.047241 systemd-networkd[1233]: flannel.1: Link UP Feb 13 19:52:05.047248 systemd-networkd[1233]: flannel.1: Gained carrier Feb 13 19:52:05.900790 kubelet[2669]: E0213 19:52:05.900748 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:06.913391 systemd-networkd[1233]: flannel.1: Gained IPv6LL Feb 13 19:52:08.925530 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:41828.service - OpenSSH per-connection server daemon (10.0.0.1:41828). Feb 13 19:52:08.963539 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 41828 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:08.966480 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:08.970039 systemd-logind[1515]: New session 6 of user core. Feb 13 19:52:08.975439 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:52:09.091348 sshd[3317]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:09.095017 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:41828.service: Deactivated successfully. Feb 13 19:52:09.097695 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:52:09.098333 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:52:09.099091 systemd-logind[1515]: Removed session 6. Feb 13 19:52:14.107405 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016). Feb 13 19:52:14.144517 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:14.145705 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:14.149894 systemd-logind[1515]: New session 7 of user core. 
Feb 13 19:52:14.156452 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:52:14.276156 sshd[3356]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:14.280872 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:58016.service: Deactivated successfully. Feb 13 19:52:14.282782 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:52:14.282918 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:52:14.283961 systemd-logind[1515]: Removed session 7. Feb 13 19:52:15.830706 kubelet[2669]: E0213 19:52:15.829344 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:15.830706 kubelet[2669]: E0213 19:52:15.829949 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:15.831112 containerd[1534]: time="2025-02-13T19:52:15.830321319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vxtk4,Uid:fcdf435b-222b-4463-b11d-e10a00adc872,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:15.831112 containerd[1534]: time="2025-02-13T19:52:15.830953827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxqhs,Uid:7f734f8e-90e1-4965-bdff-46d04c1203c1,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:15.877610 systemd-networkd[1233]: cni0: Link UP Feb 13 19:52:15.877614 systemd-networkd[1233]: cni0: Gained carrier Feb 13 19:52:15.881354 systemd-networkd[1233]: cni0: Lost carrier Feb 13 19:52:15.898262 systemd-networkd[1233]: veth70286c42: Link UP Feb 13 19:52:15.902140 systemd-networkd[1233]: vethc56ef6db: Link UP Feb 13 19:52:15.905279 kernel: cni0: port 1(veth70286c42) entered blocking state Feb 13 19:52:15.905340 kernel: cni0: port 1(veth70286c42) entered disabled state Feb 13 19:52:15.905357 kernel: veth70286c42: entered allmulticast mode Feb 13 19:52:15.905380 kernel: veth70286c42: entered promiscuous mode Feb 13 19:52:15.905394 kernel: cni0: port 1(veth70286c42) entered blocking state Feb 13 19:52:15.905407 kernel: cni0: port 1(veth70286c42) entered forwarding state Feb 13 19:52:15.906223 kernel: cni0: port 1(veth70286c42) entered disabled state Feb 13 19:52:15.908437 kernel: cni0: port 2(vethc56ef6db) entered blocking state Feb 13 19:52:15.908500 kernel: cni0: port 2(vethc56ef6db) entered disabled state Feb 13 19:52:15.908542 kernel: vethc56ef6db: entered allmulticast mode Feb 13 19:52:15.909396 kernel: vethc56ef6db: entered promiscuous mode Feb 13 19:52:15.915657 kernel: cni0: port 1(veth70286c42) entered blocking state Feb 13 19:52:15.915737 kernel: cni0: port 1(veth70286c42) entered forwarding state Feb 13 19:52:15.918792 kernel: cni0: port 2(vethc56ef6db) entered blocking state Feb 13 19:52:15.918841 kernel: cni0: port 2(vethc56ef6db) entered forwarding state Feb 13 19:52:15.918866 systemd-networkd[1233]: veth70286c42: Gained carrier Feb 13 19:52:15.920717 systemd-networkd[1233]: cni0: Gained carrier Feb 13 19:52:15.920905 systemd-networkd[1233]: vethc56ef6db: Gained carrier Feb 13 19:52:15.924275 containerd[1534]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 
0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Feb 13 19:52:15.924275 containerd[1534]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:52:15.926260 containerd[1534]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 13 19:52:15.926260 containerd[1534]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Feb 13 19:52:15.926260 containerd[1534]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:52:15.943049 containerd[1534]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:52:15.942953023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:15.943049 containerd[1534]: time="2025-02-13T19:52:15.943014946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:15.943049 containerd[1534]: time="2025-02-13T19:52:15.943030386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:15.943265 containerd[1534]: time="2025-02-13T19:52:15.943143192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:15.948857 containerd[1534]: time="2025-02-13T19:52:15.948721482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:15.948857 containerd[1534]: time="2025-02-13T19:52:15.948765404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:15.948857 containerd[1534]: time="2025-02-13T19:52:15.948776045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:15.948974 containerd[1534]: time="2025-02-13T19:52:15.948854928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:15.970260 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:52:15.972061 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:52:15.992423 containerd[1534]: time="2025-02-13T19:52:15.992383886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxqhs,Uid:7f734f8e-90e1-4965-bdff-46d04c1203c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ac188aa647a398081ddcb735515f319fa30bdb6d37d7246317dfd46d87fef2d\"" Feb 13 19:52:15.992553 containerd[1534]: time="2025-02-13T19:52:15.992384166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vxtk4,Uid:fcdf435b-222b-4463-b11d-e10a00adc872,Namespace:kube-system,Attempt:0,} returns sandbox id \"88fa07bb76e4349234fc7ff92468c56a2eb17cd003ad0ad6173d9323a0299df7\"" Feb 13 19:52:15.992974 kubelet[2669]: E0213 19:52:15.992951 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:15.993388 kubelet[2669]: E0213 19:52:15.993368 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:15.996055 containerd[1534]: time="2025-02-13T19:52:15.995988608Z" level=info msg="CreateContainer within sandbox \"88fa07bb76e4349234fc7ff92468c56a2eb17cd003ad0ad6173d9323a0299df7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:52:15.996628 containerd[1534]: time="2025-02-13T19:52:15.996554233Z" level=info msg="CreateContainer within sandbox \"9ac188aa647a398081ddcb735515f319fa30bdb6d37d7246317dfd46d87fef2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:52:16.010431 containerd[1534]: time="2025-02-13T19:52:16.010372923Z" level=info msg="CreateContainer within sandbox \"9ac188aa647a398081ddcb735515f319fa30bdb6d37d7246317dfd46d87fef2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d8c495f256c7c9a37a821c8e4e896224aee64ab984448a12543413c63347dc8\"" Feb 13 19:52:16.011852 containerd[1534]: time="2025-02-13T19:52:16.011825346Z" level=info msg="StartContainer for \"7d8c495f256c7c9a37a821c8e4e896224aee64ab984448a12543413c63347dc8\"" Feb 13 19:52:16.030219 containerd[1534]: time="2025-02-13T19:52:16.030159547Z" level=info msg="CreateContainer within sandbox \"88fa07bb76e4349234fc7ff92468c56a2eb17cd003ad0ad6173d9323a0299df7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c0b805d53706602bec5002b183173f43a2ddf9f1fc02b1a353ffc13be287bfa\"" Feb 13 19:52:16.031223 containerd[1534]: time="2025-02-13T19:52:16.030722612Z" level=info msg="StartContainer for \"5c0b805d53706602bec5002b183173f43a2ddf9f1fc02b1a353ffc13be287bfa\"" Feb 13 19:52:16.060465 containerd[1534]: time="2025-02-13T19:52:16.060404228Z" level=info msg="StartContainer for \"7d8c495f256c7c9a37a821c8e4e896224aee64ab984448a12543413c63347dc8\" returns successfully" Feb 13 19:52:16.085360 containerd[1534]: time="2025-02-13T19:52:16.085256714Z" level=info msg="StartContainer for \"5c0b805d53706602bec5002b183173f43a2ddf9f1fc02b1a353ffc13be287bfa\" returns successfully" Feb 13 19:52:16.926696 kubelet[2669]: E0213 19:52:16.926564 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:16.930866 kubelet[2669]: E0213 19:52:16.930840 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:16.936345 kubelet[2669]: I0213 19:52:16.936183 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vxtk4" podStartSLOduration=18.936169085 podStartE2EDuration="18.936169085s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:16.935523736 +0000 UTC m=+33.191296372" watchObservedRunningTime="2025-02-13 19:52:16.936169085 +0000 UTC m=+33.191941681" Feb 13 19:52:16.943697 kubelet[2669]: I0213 19:52:16.943617 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jxqhs" podStartSLOduration=18.943604609 podStartE2EDuration="18.943604609s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:16.943583889 +0000 UTC m=+33.199356485" watchObservedRunningTime="2025-02-13 19:52:16.943604609 +0000 UTC m=+33.199377205" Feb 13 19:52:17.281421 systemd-networkd[1233]: vethc56ef6db: Gained IPv6LL Feb 13 19:52:17.537410 systemd-networkd[1233]: veth70286c42: Gained IPv6LL Feb 13 19:52:17.601339 systemd-networkd[1233]: cni0: Gained IPv6LL Feb 13 19:52:17.932470 kubelet[2669]: E0213 19:52:17.932435 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:17.932771 kubelet[2669]: E0213 19:52:17.932614 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:18.934442 kubelet[2669]: E0213 19:52:18.934408 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:18.934902 kubelet[2669]: E0213 19:52:18.934487 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:19.286936 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:58026.service - OpenSSH per-connection server daemon (10.0.0.1:58026). Feb 13 19:52:19.322016 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 58026 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:19.323339 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:19.327248 systemd-logind[1515]: New session 8 of user core. Feb 13 19:52:19.337403 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:52:19.441992 sshd[3627]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:19.445560 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:52:19.445923 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:58026.service: Deactivated successfully. 
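Between the two RunPodSandbox requests above, the flannel plugin logs the netconf it delegates to the bridge plugin, first as a Go map dump and then as the JSON it actually sends: a cbr0 bridge with mtu 1450, host-local IPAM over 192.168.0.0/24, and a 192.168.0.0/17 route (the net.IPMask{0xff, 0xff, 0x80, 0x0} in the dump is a /17 prefix). The sketch below only rebuilds that JSON from plain structs so the fields are easier to read; the type and field names are illustrative, not the CNI library's own, and the values are copied from the dumps above.

```go
// Minimal sketch: reconstruct the delegate netconf shown in the delegateAdd dumps.
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

type ipamConf struct {
	Type   string                `json:"type"`
	Ranges [][]map[string]string `json:"ranges"`
	Routes []map[string]string   `json:"routes"`
}

type netConf struct {
	CNIVersion       string   `json:"cniVersion"`
	Name             string   `json:"name"`
	Type             string   `json:"type"`
	MTU              int      `json:"mtu"`
	HairpinMode      bool     `json:"hairpinMode"`
	IPMasq           bool     `json:"ipMasq"`
	IsGateway        bool     `json:"isGateway"`
	IsDefaultGateway bool     `json:"isDefaultGateway"`
	IPAM             ipamConf `json:"ipam"`
}

func main() {
	// The mask bytes from the map dump correspond to the /17 route in the JSON.
	ones, _ := net.IPMask{0xff, 0xff, 0x80, 0x00}.Size()
	fmt.Println("route prefix length:", ones) // 17

	conf := netConf{
		CNIVersion:       "0.3.1",
		Name:             "cbr0",
		Type:             "bridge",
		MTU:              1450, // value from the dump above
		HairpinMode:      true,
		IPMasq:           false,
		IsGateway:        true,
		IsDefaultGateway: true,
		IPAM: ipamConf{
			Type:   "host-local",
			Ranges: [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			Routes: []map[string]string{{"dst": "192.168.0.0/17"}},
		},
	}
	out, _ := json.Marshal(conf)
	fmt.Println(string(out)) // same content as the netconf sent to the bridge plugin
}
```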
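The "Observed pod startup duration" entries above each report two figures. From the kube-flannel entry's timestamps they reproduce as podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp and podStartSLOduration = E2E minus the image-pull window (lastFinishedPulling − firstStartedPulling); for the two coredns pods the pull timestamps are the zero value, so both figures coincide. The sketch below is only that arithmetic applied to the logged timestamps, not the kubelet latency tracker itself.

```go
// Worked example: reproduce the kube-flannel startup-duration figures from the log.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:51:58 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-02-13 19:51:58.743329968 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-02-13 19:52:02.83144399 +0000 UTC")   // lastFinishedPulling
	running := mustParse("2025-02-13 19:52:04.908608328 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 6.908608328s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 2.820494306s, as logged
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```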
Feb 13 19:52:19.447590 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:52:19.449037 systemd-logind[1515]: Removed session 8. Feb 13 19:52:24.452558 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:44474.service - OpenSSH per-connection server daemon (10.0.0.1:44474). Feb 13 19:52:24.491828 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 44474 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:24.492338 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:24.496791 systemd-logind[1515]: New session 9 of user core. Feb 13 19:52:24.506565 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:52:24.622243 sshd[3665]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:24.625033 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:44474.service: Deactivated successfully. Feb 13 19:52:24.627844 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:52:24.628488 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:52:24.629768 systemd-logind[1515]: Removed session 9. Feb 13 19:52:29.633471 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:44478.service - OpenSSH per-connection server daemon (10.0.0.1:44478). Feb 13 19:52:29.672863 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 44478 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:29.674229 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:29.678539 systemd-logind[1515]: New session 10 of user core. Feb 13 19:52:29.688530 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:52:29.817619 sshd[3704]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:29.821068 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:44478.service: Deactivated successfully. Feb 13 19:52:29.823041 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:52:29.823109 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:52:29.824824 systemd-logind[1515]: Removed session 10. Feb 13 19:52:34.832464 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:51844.service - OpenSSH per-connection server daemon (10.0.0.1:51844). Feb 13 19:52:34.868804 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 51844 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:34.870105 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:34.873903 systemd-logind[1515]: New session 11 of user core. Feb 13 19:52:34.884574 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:52:34.990900 sshd[3744]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:34.993807 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:51844.service: Deactivated successfully. Feb 13 19:52:34.996600 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:52:34.997622 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:52:34.998784 systemd-logind[1515]: Removed session 11. Feb 13 19:52:40.006409 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:51856.service - OpenSSH per-connection server daemon (10.0.0.1:51856). 
Feb 13 19:52:40.042794 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 51856 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:40.043929 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:40.047571 systemd-logind[1515]: New session 12 of user core. Feb 13 19:52:40.059442 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:52:40.166046 sshd[3782]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:40.169791 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:51856.service: Deactivated successfully. Feb 13 19:52:40.171680 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:52:40.171737 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:52:40.172471 systemd-logind[1515]: Removed session 12. Feb 13 19:52:45.176419 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:33784.service - OpenSSH per-connection server daemon (10.0.0.1:33784). Feb 13 19:52:45.211140 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 33784 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:45.212376 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:45.216137 systemd-logind[1515]: New session 13 of user core. Feb 13 19:52:45.224452 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:52:45.327248 sshd[3827]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:45.330842 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:33784.service: Deactivated successfully. Feb 13 19:52:45.332688 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:52:45.332749 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:52:45.333657 systemd-logind[1515]: Removed session 13. Feb 13 19:52:50.338417 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:33794.service - OpenSSH per-connection server daemon (10.0.0.1:33794). Feb 13 19:52:50.380254 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 33794 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:50.381401 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:50.385407 systemd-logind[1515]: New session 14 of user core. Feb 13 19:52:50.395462 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:52:50.509723 sshd[3880]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:50.512094 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:33794.service: Deactivated successfully. Feb 13 19:52:50.517925 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:52:50.519828 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:52:50.520772 systemd-logind[1515]: Removed session 14. Feb 13 19:52:55.529412 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:51748.service - OpenSSH per-connection server daemon (10.0.0.1:51748). Feb 13 19:52:55.565055 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 51748 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:52:55.566224 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:55.570683 systemd-logind[1515]: New session 15 of user core. Feb 13 19:52:55.583419 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 19:52:55.701390 sshd[3918]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:55.704943 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:51748.service: Deactivated successfully. Feb 13 19:52:55.710185 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:52:55.711001 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:52:55.711886 systemd-logind[1515]: Removed session 15. Feb 13 19:53:00.712518 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760). Feb 13 19:53:00.751135 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:00.751605 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:00.755135 systemd-logind[1515]: New session 16 of user core. Feb 13 19:53:00.765471 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:53:00.872593 sshd[3958]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:00.876277 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:51760.service: Deactivated successfully. Feb 13 19:53:00.878990 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:53:00.880120 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:53:00.881344 systemd-logind[1515]: Removed session 16. Feb 13 19:53:01.830903 kubelet[2669]: E0213 19:53:01.830873 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:05.886409 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:43142.service - OpenSSH per-connection server daemon (10.0.0.1:43142). Feb 13 19:53:05.927213 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 43142 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:05.927628 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:05.932486 systemd-logind[1515]: New session 17 of user core. Feb 13 19:53:05.941497 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:53:06.050938 sshd[3996]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:06.054368 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:43142.service: Deactivated successfully. Feb 13 19:53:06.056077 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:53:06.057810 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:53:06.058653 systemd-logind[1515]: Removed session 17. Feb 13 19:53:11.062396 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:43154.service - OpenSSH per-connection server daemon (10.0.0.1:43154). Feb 13 19:53:11.096694 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 43154 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:11.097807 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:11.101812 systemd-logind[1515]: New session 18 of user core. Feb 13 19:53:11.108521 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:53:11.212942 sshd[4034]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:11.216267 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:43154.service: Deactivated successfully. Feb 13 19:53:11.218557 systemd[1]: session-18.scope: Deactivated successfully. 
Feb 13 19:53:11.219561 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:53:11.220407 systemd-logind[1515]: Removed session 18. Feb 13 19:53:11.828642 kubelet[2669]: E0213 19:53:11.828601 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:14.828238 kubelet[2669]: E0213 19:53:14.828169 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:16.227501 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:44176.service - OpenSSH per-connection server daemon (10.0.0.1:44176). Feb 13 19:53:16.296500 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 44176 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:16.297805 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:16.302057 systemd-logind[1515]: New session 19 of user core. Feb 13 19:53:16.311436 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:53:16.415707 sshd[4071]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:16.418530 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:53:16.418661 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:44176.service: Deactivated successfully. Feb 13 19:53:16.421002 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:53:16.422230 systemd-logind[1515]: Removed session 19. Feb 13 19:53:21.426457 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:44180.service - OpenSSH per-connection server daemon (10.0.0.1:44180). Feb 13 19:53:21.461065 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 44180 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:21.462177 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:21.465498 systemd-logind[1515]: New session 20 of user core. Feb 13 19:53:21.471466 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:53:21.576748 sshd[4108]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:21.579954 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:44180.service: Deactivated successfully. Feb 13 19:53:21.581881 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:53:21.581944 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:53:21.582802 systemd-logind[1515]: Removed session 20. Feb 13 19:53:23.829215 kubelet[2669]: E0213 19:53:23.828822 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:23.829215 kubelet[2669]: E0213 19:53:23.828909 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:26.594405 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:42776.service - OpenSSH per-connection server daemon (10.0.0.1:42776). 
Feb 13 19:53:26.629323 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 42776 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:26.630465 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:26.633764 systemd-logind[1515]: New session 21 of user core. Feb 13 19:53:26.644394 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:53:26.748598 sshd[4145]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:26.751359 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:42776.service: Deactivated successfully. Feb 13 19:53:26.753269 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:53:26.753271 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:53:26.754926 systemd-logind[1515]: Removed session 21. Feb 13 19:53:26.827933 kubelet[2669]: E0213 19:53:26.827907 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:28.829015 kubelet[2669]: E0213 19:53:28.828963 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:31.767414 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:42778.service - OpenSSH per-connection server daemon (10.0.0.1:42778). Feb 13 19:53:31.801687 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 42778 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:31.802778 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:31.806874 systemd-logind[1515]: New session 22 of user core. Feb 13 19:53:31.820416 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:53:31.922930 sshd[4184]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:31.926577 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:42778.service: Deactivated successfully. Feb 13 19:53:31.929666 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:53:31.929945 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:53:31.931135 systemd-logind[1515]: Removed session 22. Feb 13 19:53:36.935443 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:37774.service - OpenSSH per-connection server daemon (10.0.0.1:37774). Feb 13 19:53:36.969682 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 37774 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:36.970778 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:36.974503 systemd-logind[1515]: New session 23 of user core. Feb 13 19:53:36.986423 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:53:37.093693 sshd[4224]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:37.096525 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:37774.service: Deactivated successfully. Feb 13 19:53:37.099405 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:53:37.100513 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:53:37.102455 systemd-logind[1515]: Removed session 23. Feb 13 19:53:42.108517 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:37780.service - OpenSSH per-connection server daemon (10.0.0.1:37780). 
Feb 13 19:53:42.144046 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 37780 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:42.145146 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:42.148642 systemd-logind[1515]: New session 24 of user core. Feb 13 19:53:42.160408 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:53:42.263205 sshd[4262]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:42.266345 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:37780.service: Deactivated successfully. Feb 13 19:53:42.268824 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:53:42.269491 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:53:42.270649 systemd-logind[1515]: Removed session 24. Feb 13 19:53:47.283475 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778). Feb 13 19:53:47.318013 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:47.319280 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:47.322823 systemd-logind[1515]: New session 25 of user core. Feb 13 19:53:47.338528 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:53:47.444540 sshd[4302]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:47.447261 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:40778.service: Deactivated successfully. Feb 13 19:53:47.449964 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:53:47.450149 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:53:47.452292 systemd-logind[1515]: Removed session 25. Feb 13 19:53:52.460456 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:40784.service - OpenSSH per-connection server daemon (10.0.0.1:40784). Feb 13 19:53:52.496895 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 40784 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:52.498315 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:52.502945 systemd-logind[1515]: New session 26 of user core. Feb 13 19:53:52.515482 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:53:52.622489 sshd[4340]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:52.625561 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:40784.service: Deactivated successfully. Feb 13 19:53:52.627533 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:53:52.627604 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:53:52.628958 systemd-logind[1515]: Removed session 26. Feb 13 19:53:57.632432 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:57668.service - OpenSSH per-connection server daemon (10.0.0.1:57668). Feb 13 19:53:57.666813 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 57668 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:53:57.668034 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:57.672288 systemd-logind[1515]: New session 27 of user core. Feb 13 19:53:57.684487 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 19:53:57.792395 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:57.795974 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:57668.service: Deactivated successfully. Feb 13 19:53:57.797962 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:53:57.797964 systemd-logind[1515]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:53:57.799666 systemd-logind[1515]: Removed session 27. Feb 13 19:54:02.810461 systemd[1]: Started sshd@27-10.0.0.116:22-10.0.0.1:44262.service - OpenSSH per-connection server daemon (10.0.0.1:44262). Feb 13 19:54:02.845438 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 44262 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:02.846797 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:02.850848 systemd-logind[1515]: New session 28 of user core. Feb 13 19:54:02.862434 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:54:02.967880 sshd[4417]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:02.971185 systemd[1]: sshd@27-10.0.0.116:22-10.0.0.1:44262.service: Deactivated successfully. Feb 13 19:54:02.973355 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:54:02.973635 systemd-logind[1515]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:54:02.974761 systemd-logind[1515]: Removed session 28. Feb 13 19:54:07.978416 systemd[1]: Started sshd@28-10.0.0.116:22-10.0.0.1:44266.service - OpenSSH per-connection server daemon (10.0.0.1:44266). Feb 13 19:54:08.015281 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 44266 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:08.016499 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:08.020290 systemd-logind[1515]: New session 29 of user core. Feb 13 19:54:08.030543 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 19:54:08.136403 sshd[4455]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:08.139602 systemd[1]: sshd@28-10.0.0.116:22-10.0.0.1:44266.service: Deactivated successfully. Feb 13 19:54:08.141630 systemd-logind[1515]: Session 29 logged out. Waiting for processes to exit. Feb 13 19:54:08.141686 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 19:54:08.142722 systemd-logind[1515]: Removed session 29. Feb 13 19:54:13.147446 systemd[1]: Started sshd@29-10.0.0.116:22-10.0.0.1:36026.service - OpenSSH per-connection server daemon (10.0.0.1:36026). Feb 13 19:54:13.182605 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:13.183905 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:13.188220 systemd-logind[1515]: New session 30 of user core. Feb 13 19:54:13.200447 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 19:54:13.307920 sshd[4496]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:13.311391 systemd[1]: sshd@29-10.0.0.116:22-10.0.0.1:36026.service: Deactivated successfully. Feb 13 19:54:13.314098 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 19:54:13.315177 systemd-logind[1515]: Session 30 logged out. Waiting for processes to exit. Feb 13 19:54:13.316141 systemd-logind[1515]: Removed session 30. 
Feb 13 19:54:18.321423 systemd[1]: Started sshd@30-10.0.0.116:22-10.0.0.1:36042.service - OpenSSH per-connection server daemon (10.0.0.1:36042). Feb 13 19:54:18.356073 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 36042 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:18.357366 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:18.361393 systemd-logind[1515]: New session 31 of user core. Feb 13 19:54:18.372407 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 19:54:18.474078 sshd[4534]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:18.477372 systemd[1]: sshd@30-10.0.0.116:22-10.0.0.1:36042.service: Deactivated successfully. Feb 13 19:54:18.479306 systemd-logind[1515]: Session 31 logged out. Waiting for processes to exit. Feb 13 19:54:18.479371 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 19:54:18.480354 systemd-logind[1515]: Removed session 31. Feb 13 19:54:18.828891 kubelet[2669]: E0213 19:54:18.828854 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:23.485486 systemd[1]: Started sshd@31-10.0.0.116:22-10.0.0.1:34928.service - OpenSSH per-connection server daemon (10.0.0.1:34928). Feb 13 19:54:23.520374 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 34928 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:23.521517 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:23.525006 systemd-logind[1515]: New session 32 of user core. Feb 13 19:54:23.540488 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 19:54:23.641628 sshd[4572]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:23.644803 systemd-logind[1515]: Session 32 logged out. Waiting for processes to exit. Feb 13 19:54:23.644894 systemd[1]: sshd@31-10.0.0.116:22-10.0.0.1:34928.service: Deactivated successfully. Feb 13 19:54:23.646893 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 19:54:23.647644 systemd-logind[1515]: Removed session 32. Feb 13 19:54:28.654422 systemd[1]: Started sshd@32-10.0.0.116:22-10.0.0.1:34936.service - OpenSSH per-connection server daemon (10.0.0.1:34936). Feb 13 19:54:28.698785 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 34936 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:28.699247 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:28.703641 systemd-logind[1515]: New session 33 of user core. Feb 13 19:54:28.713471 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 19:54:28.819656 sshd[4610]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:28.822568 systemd[1]: sshd@32-10.0.0.116:22-10.0.0.1:34936.service: Deactivated successfully. Feb 13 19:54:28.826395 systemd-logind[1515]: Session 33 logged out. Waiting for processes to exit. Feb 13 19:54:28.826430 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 19:54:28.827662 systemd-logind[1515]: Removed session 33. 
Feb 13 19:54:30.828734 kubelet[2669]: E0213 19:54:30.828699 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:33.834481 systemd[1]: Started sshd@33-10.0.0.116:22-10.0.0.1:59228.service - OpenSSH per-connection server daemon (10.0.0.1:59228). Feb 13 19:54:33.869943 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 59228 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:33.871044 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:33.874703 systemd-logind[1515]: New session 34 of user core. Feb 13 19:54:33.886450 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 19:54:33.993956 sshd[4650]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:34.001329 systemd-logind[1515]: Session 34 logged out. Waiting for processes to exit. Feb 13 19:54:34.001513 systemd[1]: sshd@33-10.0.0.116:22-10.0.0.1:59228.service: Deactivated successfully. Feb 13 19:54:34.004840 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 19:54:34.006741 systemd-logind[1515]: Removed session 34. Feb 13 19:54:34.828740 kubelet[2669]: E0213 19:54:34.828702 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:39.008397 systemd[1]: Started sshd@34-10.0.0.116:22-10.0.0.1:59234.service - OpenSSH per-connection server daemon (10.0.0.1:59234). Feb 13 19:54:39.044348 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 59234 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:39.045554 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:39.049655 systemd-logind[1515]: New session 35 of user core. Feb 13 19:54:39.061514 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 19:54:39.177951 sshd[4687]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:39.181043 systemd[1]: sshd@34-10.0.0.116:22-10.0.0.1:59234.service: Deactivated successfully. Feb 13 19:54:39.183756 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 19:54:39.185109 systemd-logind[1515]: Session 35 logged out. Waiting for processes to exit. Feb 13 19:54:39.186038 systemd-logind[1515]: Removed session 35. Feb 13 19:54:42.828646 kubelet[2669]: E0213 19:54:42.828567 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:44.188427 systemd[1]: Started sshd@35-10.0.0.116:22-10.0.0.1:49606.service - OpenSSH per-connection server daemon (10.0.0.1:49606). Feb 13 19:54:44.234121 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 49606 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:44.235360 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:44.239070 systemd-logind[1515]: New session 36 of user core. Feb 13 19:54:44.247443 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 19:54:44.365161 sshd[4726]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:44.368820 systemd[1]: sshd@35-10.0.0.116:22-10.0.0.1:49606.service: Deactivated successfully. 
Feb 13 19:54:44.371145 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 19:54:44.371796 systemd-logind[1515]: Session 36 logged out. Waiting for processes to exit. Feb 13 19:54:44.372722 systemd-logind[1515]: Removed session 36. Feb 13 19:54:45.828870 kubelet[2669]: E0213 19:54:45.828832 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:49.376412 systemd[1]: Started sshd@36-10.0.0.116:22-10.0.0.1:49622.service - OpenSSH per-connection server daemon (10.0.0.1:49622). Feb 13 19:54:49.413001 sshd[4763]: Accepted publickey for core from 10.0.0.1 port 49622 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:49.414285 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:49.418616 systemd-logind[1515]: New session 37 of user core. Feb 13 19:54:49.429398 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 19:54:49.536902 sshd[4763]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:49.539657 systemd[1]: sshd@36-10.0.0.116:22-10.0.0.1:49622.service: Deactivated successfully. Feb 13 19:54:49.541855 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 19:54:49.547035 systemd-logind[1515]: Session 37 logged out. Waiting for processes to exit. Feb 13 19:54:49.548622 systemd-logind[1515]: Removed session 37. Feb 13 19:54:54.550409 systemd[1]: Started sshd@37-10.0.0.116:22-10.0.0.1:40672.service - OpenSSH per-connection server daemon (10.0.0.1:40672). Feb 13 19:54:54.601430 sshd[4803]: Accepted publickey for core from 10.0.0.1 port 40672 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:54.602859 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:54.607723 systemd-logind[1515]: New session 38 of user core. Feb 13 19:54:54.619455 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 19:54:54.738384 sshd[4803]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:54.740842 systemd[1]: sshd@37-10.0.0.116:22-10.0.0.1:40672.service: Deactivated successfully. Feb 13 19:54:54.746992 systemd-logind[1515]: Session 38 logged out. Waiting for processes to exit. Feb 13 19:54:54.747424 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 19:54:54.749536 systemd-logind[1515]: Removed session 38. Feb 13 19:54:54.828615 kubelet[2669]: E0213 19:54:54.828518 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:56.828580 kubelet[2669]: E0213 19:54:56.828541 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:59.757404 systemd[1]: Started sshd@38-10.0.0.116:22-10.0.0.1:40688.service - OpenSSH per-connection server daemon (10.0.0.1:40688). Feb 13 19:54:59.793482 sshd[4843]: Accepted publickey for core from 10.0.0.1 port 40688 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:54:59.794570 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:59.798131 systemd-logind[1515]: New session 39 of user core. Feb 13 19:54:59.811468 systemd[1]: Started session-39.scope - Session 39 of User core. 
Feb 13 19:54:59.913888 sshd[4843]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:59.916835 systemd[1]: sshd@38-10.0.0.116:22-10.0.0.1:40688.service: Deactivated successfully. Feb 13 19:54:59.918689 systemd-logind[1515]: Session 39 logged out. Waiting for processes to exit. Feb 13 19:54:59.918762 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 19:54:59.919557 systemd-logind[1515]: Removed session 39. Feb 13 19:55:04.925436 systemd[1]: Started sshd@39-10.0.0.116:22-10.0.0.1:47984.service - OpenSSH per-connection server daemon (10.0.0.1:47984). Feb 13 19:55:04.960809 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 47984 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:04.961968 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:04.966214 systemd-logind[1515]: New session 40 of user core. Feb 13 19:55:04.976417 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 19:55:05.078864 sshd[4881]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:05.082412 systemd[1]: sshd@39-10.0.0.116:22-10.0.0.1:47984.service: Deactivated successfully. Feb 13 19:55:05.084470 systemd-logind[1515]: Session 40 logged out. Waiting for processes to exit. Feb 13 19:55:05.084555 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 19:55:05.085527 systemd-logind[1515]: Removed session 40. Feb 13 19:55:10.088465 systemd[1]: Started sshd@40-10.0.0.116:22-10.0.0.1:48000.service - OpenSSH per-connection server daemon (10.0.0.1:48000). Feb 13 19:55:10.124811 sshd[4919]: Accepted publickey for core from 10.0.0.1 port 48000 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:10.126011 sshd[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:10.132268 systemd-logind[1515]: New session 41 of user core. Feb 13 19:55:10.140438 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 19:55:10.258674 sshd[4919]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:10.267448 systemd[1]: Started sshd@41-10.0.0.116:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014). Feb 13 19:55:10.267853 systemd[1]: sshd@40-10.0.0.116:22-10.0.0.1:48000.service: Deactivated successfully. Feb 13 19:55:10.269332 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 19:55:10.273263 systemd-logind[1515]: Session 41 logged out. Waiting for processes to exit. Feb 13 19:55:10.274563 systemd-logind[1515]: Removed session 41. Feb 13 19:55:10.305431 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:10.306633 sshd[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:10.310779 systemd-logind[1515]: New session 42 of user core. Feb 13 19:55:10.318443 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 19:55:10.457368 sshd[4933]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:10.467470 systemd[1]: Started sshd@42-10.0.0.116:22-10.0.0.1:48016.service - OpenSSH per-connection server daemon (10.0.0.1:48016). Feb 13 19:55:10.468157 systemd[1]: sshd@41-10.0.0.116:22-10.0.0.1:48014.service: Deactivated successfully. Feb 13 19:55:10.469753 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 19:55:10.477572 systemd-logind[1515]: Session 42 logged out. Waiting for processes to exit. 
Feb 13 19:55:10.480684 systemd-logind[1515]: Removed session 42. Feb 13 19:55:10.506476 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:10.507635 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:10.511795 systemd-logind[1515]: New session 43 of user core. Feb 13 19:55:10.517436 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 19:55:10.622101 sshd[4952]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:10.625025 systemd[1]: sshd@42-10.0.0.116:22-10.0.0.1:48016.service: Deactivated successfully. Feb 13 19:55:10.627100 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 19:55:10.627106 systemd-logind[1515]: Session 43 logged out. Waiting for processes to exit. Feb 13 19:55:10.629288 systemd-logind[1515]: Removed session 43. Feb 13 19:55:15.636421 systemd[1]: Started sshd@43-10.0.0.116:22-10.0.0.1:34712.service - OpenSSH per-connection server daemon (10.0.0.1:34712). Feb 13 19:55:15.671068 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:15.672219 sshd[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:15.675740 systemd-logind[1515]: New session 44 of user core. Feb 13 19:55:15.683431 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 19:55:15.790501 sshd[4991]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:15.793181 systemd[1]: sshd@43-10.0.0.116:22-10.0.0.1:34712.service: Deactivated successfully. Feb 13 19:55:15.796350 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 19:55:15.797296 systemd-logind[1515]: Session 44 logged out. Waiting for processes to exit. Feb 13 19:55:15.798165 systemd-logind[1515]: Removed session 44. Feb 13 19:55:20.801405 systemd[1]: Started sshd@44-10.0.0.116:22-10.0.0.1:34714.service - OpenSSH per-connection server daemon (10.0.0.1:34714). Feb 13 19:55:20.835630 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 34714 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:20.836736 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:20.840269 systemd-logind[1515]: New session 45 of user core. Feb 13 19:55:20.848613 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 19:55:20.954657 sshd[5043]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:20.957270 systemd[1]: sshd@44-10.0.0.116:22-10.0.0.1:34714.service: Deactivated successfully. Feb 13 19:55:20.959801 systemd-logind[1515]: Session 45 logged out. Waiting for processes to exit. Feb 13 19:55:20.960335 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 19:55:20.961268 systemd-logind[1515]: Removed session 45. Feb 13 19:55:25.975429 systemd[1]: Started sshd@45-10.0.0.116:22-10.0.0.1:55476.service - OpenSSH per-connection server daemon (10.0.0.1:55476). Feb 13 19:55:26.009497 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 55476 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:26.010603 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:26.016276 systemd-logind[1515]: New session 46 of user core. Feb 13 19:55:26.026401 systemd[1]: Started session-46.scope - Session 46 of User core. 
Feb 13 19:55:26.135815 sshd[5079]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:26.139335 systemd[1]: sshd@45-10.0.0.116:22-10.0.0.1:55476.service: Deactivated successfully. Feb 13 19:55:26.141248 systemd-logind[1515]: Session 46 logged out. Waiting for processes to exit. Feb 13 19:55:26.141256 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 19:55:26.142641 systemd-logind[1515]: Removed session 46. Feb 13 19:55:31.151442 systemd[1]: Started sshd@46-10.0.0.116:22-10.0.0.1:55478.service - OpenSSH per-connection server daemon (10.0.0.1:55478). Feb 13 19:55:31.191512 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 55478 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:31.191979 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:31.196055 systemd-logind[1515]: New session 47 of user core. Feb 13 19:55:31.205462 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 19:55:31.334326 sshd[5117]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:31.337639 systemd[1]: sshd@46-10.0.0.116:22-10.0.0.1:55478.service: Deactivated successfully. Feb 13 19:55:31.342337 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 19:55:31.343478 systemd-logind[1515]: Session 47 logged out. Waiting for processes to exit. Feb 13 19:55:31.347613 systemd-logind[1515]: Removed session 47. Feb 13 19:55:36.344419 systemd[1]: Started sshd@47-10.0.0.116:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658). Feb 13 19:55:36.381831 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:36.382987 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:36.387454 systemd-logind[1515]: New session 48 of user core. Feb 13 19:55:36.397476 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 19:55:36.513760 sshd[5153]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:36.517047 systemd[1]: sshd@47-10.0.0.116:22-10.0.0.1:60658.service: Deactivated successfully. Feb 13 19:55:36.519357 systemd-logind[1515]: Session 48 logged out. Waiting for processes to exit. Feb 13 19:55:36.519516 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 19:55:36.520975 systemd-logind[1515]: Removed session 48. Feb 13 19:55:41.535439 systemd[1]: Started sshd@48-10.0.0.116:22-10.0.0.1:60672.service - OpenSSH per-connection server daemon (10.0.0.1:60672). Feb 13 19:55:41.570065 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 60672 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:41.571280 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:41.577012 systemd-logind[1515]: New session 49 of user core. Feb 13 19:55:41.583423 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 19:55:41.694334 sshd[5190]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:41.697492 systemd[1]: sshd@48-10.0.0.116:22-10.0.0.1:60672.service: Deactivated successfully. Feb 13 19:55:41.699590 systemd-logind[1515]: Session 49 logged out. Waiting for processes to exit. Feb 13 19:55:41.699692 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 19:55:41.700470 systemd-logind[1515]: Removed session 49. 
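Each of the SSH cycles recorded above follows the same shape: sshd accepts a public key for user core, pam_unix opens the session, systemd-logind allocates a numbered session backed by a session-NN.scope unit, and a short time later the per-connection sshd@... service is deactivated and the session is removed. The sketch below is only an illustration of how journal lines like these could be paired to measure session lifetimes; it is not part of any tool shown in this log, and every name in it is hypothetical.

```go
// Hypothetical helper, not part of this system: pairs "New session N" and
// "Removed session N" journal lines (one entry per line on stdin) to report
// how long each SSH session of user core stayed open.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ \d+ \d{2}:\d{2}:\d{2})\.\d+ .*New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ \d+ \d{2}:\d{2}:\d{2})\.\d+ .*Removed session (\d+)\.`)
)

// parseStamp parses the "Feb 13 19:55:10" prefix; the log carries no year,
// so only durations are meaningful.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05", s)
}

func main() {
	opened := map[string]time.Time{}

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := parseStamp(m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], end.Sub(start))
				}
				delete(opened, m[2])
			}
		}
	}
}
```

Fed the session-39 through session-96 entries from this log, such a pairing would show most sessions lasting on the order of a hundred milliseconds before being closed again.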
Feb 13 19:55:42.828582 kubelet[2669]: E0213 19:55:42.828551 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:55:42.829004 kubelet[2669]: E0213 19:55:42.828780 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:55:46.705429 systemd[1]: Started sshd@49-10.0.0.116:22-10.0.0.1:59094.service - OpenSSH per-connection server daemon (10.0.0.1:59094). Feb 13 19:55:46.750832 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 59094 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:46.751350 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:46.754963 systemd-logind[1515]: New session 50 of user core. Feb 13 19:55:46.761403 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 19:55:46.885330 sshd[5229]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:46.890009 systemd[1]: sshd@49-10.0.0.116:22-10.0.0.1:59094.service: Deactivated successfully. Feb 13 19:55:46.893277 systemd-logind[1515]: Session 50 logged out. Waiting for processes to exit. Feb 13 19:55:46.893463 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 19:55:46.895309 systemd-logind[1515]: Removed session 50. Feb 13 19:55:50.828398 kubelet[2669]: E0213 19:55:50.828354 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:55:51.895444 systemd[1]: Started sshd@50-10.0.0.116:22-10.0.0.1:59110.service - OpenSSH per-connection server daemon (10.0.0.1:59110). Feb 13 19:55:51.935254 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 59110 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:51.936948 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:51.941480 systemd-logind[1515]: New session 51 of user core. Feb 13 19:55:51.952491 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 19:55:52.065395 sshd[5266]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:52.069639 systemd[1]: sshd@50-10.0.0.116:22-10.0.0.1:59110.service: Deactivated successfully. Feb 13 19:55:52.071848 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 19:55:52.071905 systemd-logind[1515]: Session 51 logged out. Waiting for processes to exit. Feb 13 19:55:52.073274 systemd-logind[1515]: Removed session 51. Feb 13 19:55:55.828907 kubelet[2669]: E0213 19:55:55.828678 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:55:57.075401 systemd[1]: Started sshd@51-10.0.0.116:22-10.0.0.1:53568.service - OpenSSH per-connection server daemon (10.0.0.1:53568). Feb 13 19:55:57.113238 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 53568 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:57.114123 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:57.118028 systemd-logind[1515]: New session 52 of user core. Feb 13 19:55:57.128434 systemd[1]: Started session-52.scope - Session 52 of User core. 
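The kubelet warning repeated throughout this log ("Nameserver limits exceeded", dns.go:153) is emitted when the node's resolv.conf lists more nameservers than the resolver limit of three: the extra entries are dropped and only the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8) is passed on. The Go sketch below only illustrates that trimming behaviour under the assumption of a three-entry limit; it is not kubelet's actual implementation, and the function and variable names are hypothetical.

```go
// Illustrative sketch of the behaviour behind the "Nameserver limits exceeded"
// warning: nameservers beyond the limit are dropped and the applied set is
// logged. Not kubelet's real code; names are hypothetical.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf / kubelet nameserver limit

// trimNameservers keeps at most maxNameservers entries and reports whether
// any had to be dropped.
func trimNameservers(all []string) (applied []string, exceeded bool) {
	if len(all) <= maxNameservers {
		return all, false
	}
	return all[:maxNameservers], true
}

func main() {
	// Hypothetical node resolv.conf with four nameservers.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}

	applied, exceeded := trimNameservers(nameservers)
	if exceeded {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```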
Feb 13 19:55:57.240201 sshd[5302]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:57.243723 systemd[1]: sshd@51-10.0.0.116:22-10.0.0.1:53568.service: Deactivated successfully. Feb 13 19:55:57.245717 systemd-logind[1515]: Session 52 logged out. Waiting for processes to exit. Feb 13 19:55:57.246061 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 19:55:57.247881 systemd-logind[1515]: Removed session 52. Feb 13 19:55:59.828910 kubelet[2669]: E0213 19:55:59.828859 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:02.260423 systemd[1]: Started sshd@52-10.0.0.116:22-10.0.0.1:53580.service - OpenSSH per-connection server daemon (10.0.0.1:53580). Feb 13 19:56:02.295230 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 53580 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:02.296401 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:02.301428 systemd-logind[1515]: New session 53 of user core. Feb 13 19:56:02.312416 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 19:56:02.427026 sshd[5340]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:02.433050 systemd-logind[1515]: Session 53 logged out. Waiting for processes to exit. Feb 13 19:56:02.433447 systemd[1]: sshd@52-10.0.0.116:22-10.0.0.1:53580.service: Deactivated successfully. Feb 13 19:56:02.435049 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 19:56:02.436799 systemd-logind[1515]: Removed session 53. Feb 13 19:56:06.828593 kubelet[2669]: E0213 19:56:06.828552 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:07.448467 systemd[1]: Started sshd@53-10.0.0.116:22-10.0.0.1:45036.service - OpenSSH per-connection server daemon (10.0.0.1:45036). Feb 13 19:56:07.485959 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 45036 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:07.487264 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:07.491264 systemd-logind[1515]: New session 54 of user core. Feb 13 19:56:07.499430 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 19:56:07.611346 sshd[5377]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:07.614800 systemd[1]: sshd@53-10.0.0.116:22-10.0.0.1:45036.service: Deactivated successfully. Feb 13 19:56:07.616994 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 19:56:07.617263 systemd-logind[1515]: Session 54 logged out. Waiting for processes to exit. Feb 13 19:56:07.620168 systemd-logind[1515]: Removed session 54. Feb 13 19:56:12.630420 systemd[1]: Started sshd@54-10.0.0.116:22-10.0.0.1:55018.service - OpenSSH per-connection server daemon (10.0.0.1:55018). Feb 13 19:56:12.665418 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 55018 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:12.666589 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:12.669978 systemd-logind[1515]: New session 55 of user core. Feb 13 19:56:12.681456 systemd[1]: Started session-55.scope - Session 55 of User core. 
Feb 13 19:56:12.784765 sshd[5414]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:12.787262 systemd[1]: sshd@54-10.0.0.116:22-10.0.0.1:55018.service: Deactivated successfully. Feb 13 19:56:12.790131 systemd-logind[1515]: Session 55 logged out. Waiting for processes to exit. Feb 13 19:56:12.790593 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 19:56:12.791552 systemd-logind[1515]: Removed session 55. Feb 13 19:56:17.796454 systemd[1]: Started sshd@55-10.0.0.116:22-10.0.0.1:55020.service - OpenSSH per-connection server daemon (10.0.0.1:55020). Feb 13 19:56:17.831575 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 55020 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:17.832739 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:17.836844 systemd-logind[1515]: New session 56 of user core. Feb 13 19:56:17.843458 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 19:56:17.949365 sshd[5451]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:17.952007 systemd[1]: sshd@55-10.0.0.116:22-10.0.0.1:55020.service: Deactivated successfully. Feb 13 19:56:17.954587 systemd-logind[1515]: Session 56 logged out. Waiting for processes to exit. Feb 13 19:56:17.955741 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 19:56:17.957359 systemd-logind[1515]: Removed session 56. Feb 13 19:56:18.828234 kubelet[2669]: E0213 19:56:18.828179 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:22.967393 systemd[1]: Started sshd@56-10.0.0.116:22-10.0.0.1:39244.service - OpenSSH per-connection server daemon (10.0.0.1:39244). Feb 13 19:56:23.002433 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 39244 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:23.003824 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:23.007553 systemd-logind[1515]: New session 57 of user core. Feb 13 19:56:23.017441 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 19:56:23.123618 sshd[5488]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:23.126843 systemd[1]: sshd@56-10.0.0.116:22-10.0.0.1:39244.service: Deactivated successfully. Feb 13 19:56:23.128973 systemd-logind[1515]: Session 57 logged out. Waiting for processes to exit. Feb 13 19:56:23.129520 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 19:56:23.130335 systemd-logind[1515]: Removed session 57. Feb 13 19:56:28.141633 systemd[1]: Started sshd@57-10.0.0.116:22-10.0.0.1:39252.service - OpenSSH per-connection server daemon (10.0.0.1:39252). Feb 13 19:56:28.178867 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:28.180633 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:28.184737 systemd-logind[1515]: New session 58 of user core. Feb 13 19:56:28.193557 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 19:56:28.307999 sshd[5525]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:28.311141 systemd[1]: sshd@57-10.0.0.116:22-10.0.0.1:39252.service: Deactivated successfully. Feb 13 19:56:28.313110 systemd-logind[1515]: Session 58 logged out. Waiting for processes to exit. 
Feb 13 19:56:28.313147 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 19:56:28.315411 systemd-logind[1515]: Removed session 58. Feb 13 19:56:33.319406 systemd[1]: Started sshd@58-10.0.0.116:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Feb 13 19:56:33.355897 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:33.356405 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:33.360946 systemd-logind[1515]: New session 59 of user core. Feb 13 19:56:33.368444 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 19:56:33.471949 sshd[5563]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:33.474445 systemd[1]: sshd@58-10.0.0.116:22-10.0.0.1:51544.service: Deactivated successfully. Feb 13 19:56:33.477630 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 19:56:33.478599 systemd-logind[1515]: Session 59 logged out. Waiting for processes to exit. Feb 13 19:56:33.479613 systemd-logind[1515]: Removed session 59. Feb 13 19:56:38.483421 systemd[1]: Started sshd@59-10.0.0.116:22-10.0.0.1:51548.service - OpenSSH per-connection server daemon (10.0.0.1:51548). Feb 13 19:56:38.518784 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:38.520410 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:38.524160 systemd-logind[1515]: New session 60 of user core. Feb 13 19:56:38.537496 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 19:56:38.645692 sshd[5600]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:38.649092 systemd[1]: sshd@59-10.0.0.116:22-10.0.0.1:51548.service: Deactivated successfully. Feb 13 19:56:38.651223 systemd-logind[1515]: Session 60 logged out. Waiting for processes to exit. Feb 13 19:56:38.651298 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 19:56:38.652358 systemd-logind[1515]: Removed session 60. Feb 13 19:56:43.663420 systemd[1]: Started sshd@60-10.0.0.116:22-10.0.0.1:35530.service - OpenSSH per-connection server daemon (10.0.0.1:35530). Feb 13 19:56:43.697779 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:43.699010 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:43.702388 systemd-logind[1515]: New session 61 of user core. Feb 13 19:56:43.709427 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 19:56:43.814321 sshd[5636]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:43.818338 systemd[1]: sshd@60-10.0.0.116:22-10.0.0.1:35530.service: Deactivated successfully. Feb 13 19:56:43.820739 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 19:56:43.821439 systemd-logind[1515]: Session 61 logged out. Waiting for processes to exit. Feb 13 19:56:43.822691 systemd-logind[1515]: Removed session 61. Feb 13 19:56:47.828647 kubelet[2669]: E0213 19:56:47.828607 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:48.828401 systemd[1]: Started sshd@61-10.0.0.116:22-10.0.0.1:35538.service - OpenSSH per-connection server daemon (10.0.0.1:35538). 
Feb 13 19:56:48.863556 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 35538 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:48.864932 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:48.868651 systemd-logind[1515]: New session 62 of user core. Feb 13 19:56:48.878418 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 19:56:48.980928 sshd[5675]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:48.983579 systemd[1]: sshd@61-10.0.0.116:22-10.0.0.1:35538.service: Deactivated successfully. Feb 13 19:56:48.986168 systemd-logind[1515]: Session 62 logged out. Waiting for processes to exit. Feb 13 19:56:48.986867 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 19:56:48.988147 systemd-logind[1515]: Removed session 62. Feb 13 19:56:53.999422 systemd[1]: Started sshd@62-10.0.0.116:22-10.0.0.1:32934.service - OpenSSH per-connection server daemon (10.0.0.1:32934). Feb 13 19:56:54.034870 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 32934 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:54.035675 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:54.040384 systemd-logind[1515]: New session 63 of user core. Feb 13 19:56:54.049439 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 19:56:54.156391 sshd[5713]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:54.158882 systemd[1]: sshd@62-10.0.0.116:22-10.0.0.1:32934.service: Deactivated successfully. Feb 13 19:56:54.161356 systemd-logind[1515]: Session 63 logged out. Waiting for processes to exit. Feb 13 19:56:54.161477 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 19:56:54.164258 systemd-logind[1515]: Removed session 63. Feb 13 19:56:59.168414 systemd[1]: Started sshd@63-10.0.0.116:22-10.0.0.1:32936.service - OpenSSH per-connection server daemon (10.0.0.1:32936). Feb 13 19:56:59.203314 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 32936 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:59.204517 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:59.208986 systemd-logind[1515]: New session 64 of user core. Feb 13 19:56:59.222495 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 19:56:59.327255 sshd[5754]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:59.330849 systemd[1]: sshd@63-10.0.0.116:22-10.0.0.1:32936.service: Deactivated successfully. Feb 13 19:56:59.330858 systemd-logind[1515]: Session 64 logged out. Waiting for processes to exit. Feb 13 19:56:59.332897 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 19:56:59.333901 systemd-logind[1515]: Removed session 64. Feb 13 19:57:04.338422 systemd[1]: Started sshd@64-10.0.0.116:22-10.0.0.1:49070.service - OpenSSH per-connection server daemon (10.0.0.1:49070). Feb 13 19:57:04.373163 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 49070 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:04.374397 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:04.377994 systemd-logind[1515]: New session 65 of user core. Feb 13 19:57:04.385422 systemd[1]: Started session-65.scope - Session 65 of User core. 
Feb 13 19:57:04.492651 sshd[5790]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:04.495314 systemd[1]: sshd@64-10.0.0.116:22-10.0.0.1:49070.service: Deactivated successfully. Feb 13 19:57:04.497783 systemd-logind[1515]: Session 65 logged out. Waiting for processes to exit. Feb 13 19:57:04.497946 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 19:57:04.499413 systemd-logind[1515]: Removed session 65. Feb 13 19:57:06.829121 kubelet[2669]: E0213 19:57:06.829021 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:07.828825 kubelet[2669]: E0213 19:57:07.828785 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:09.504546 systemd[1]: Started sshd@65-10.0.0.116:22-10.0.0.1:49084.service - OpenSSH per-connection server daemon (10.0.0.1:49084). Feb 13 19:57:09.538857 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 49084 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:09.540027 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:09.543499 systemd-logind[1515]: New session 66 of user core. Feb 13 19:57:09.555473 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 19:57:09.659548 sshd[5827]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:09.662419 systemd[1]: sshd@65-10.0.0.116:22-10.0.0.1:49084.service: Deactivated successfully. Feb 13 19:57:09.665251 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 19:57:09.665409 systemd-logind[1515]: Session 66 logged out. Waiting for processes to exit. Feb 13 19:57:09.666795 systemd-logind[1515]: Removed session 66. Feb 13 19:57:10.828622 kubelet[2669]: E0213 19:57:10.828582 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:14.672414 systemd[1]: Started sshd@66-10.0.0.116:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). Feb 13 19:57:14.706870 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:14.707975 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:14.713221 systemd-logind[1515]: New session 67 of user core. Feb 13 19:57:14.720445 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 19:57:14.822242 sshd[5864]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:14.825507 systemd[1]: sshd@66-10.0.0.116:22-10.0.0.1:34688.service: Deactivated successfully. Feb 13 19:57:14.828115 systemd-logind[1515]: Session 67 logged out. Waiting for processes to exit. Feb 13 19:57:14.828296 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 19:57:14.829905 systemd-logind[1515]: Removed session 67. Feb 13 19:57:19.844486 systemd[1]: Started sshd@67-10.0.0.116:22-10.0.0.1:34692.service - OpenSSH per-connection server daemon (10.0.0.1:34692). 
Feb 13 19:57:19.879695 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 34692 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:19.880971 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:19.884400 systemd-logind[1515]: New session 68 of user core. Feb 13 19:57:19.898515 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 19:57:20.000376 sshd[5900]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:20.003455 systemd[1]: sshd@67-10.0.0.116:22-10.0.0.1:34692.service: Deactivated successfully. Feb 13 19:57:20.005283 systemd-logind[1515]: Session 68 logged out. Waiting for processes to exit. Feb 13 19:57:20.005432 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 19:57:20.006965 systemd-logind[1515]: Removed session 68. Feb 13 19:57:20.828058 kubelet[2669]: E0213 19:57:20.828022 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:24.828369 kubelet[2669]: E0213 19:57:24.828338 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:25.011494 systemd[1]: Started sshd@68-10.0.0.116:22-10.0.0.1:43080.service - OpenSSH per-connection server daemon (10.0.0.1:43080). Feb 13 19:57:25.046034 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 43080 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:25.047180 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:25.051207 systemd-logind[1515]: New session 69 of user core. Feb 13 19:57:25.059428 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 19:57:25.162871 sshd[5937]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:25.166170 systemd[1]: sshd@68-10.0.0.116:22-10.0.0.1:43080.service: Deactivated successfully. Feb 13 19:57:25.168174 systemd-logind[1515]: Session 69 logged out. Waiting for processes to exit. Feb 13 19:57:25.168510 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 19:57:25.169497 systemd-logind[1515]: Removed session 69. Feb 13 19:57:30.174456 systemd[1]: Started sshd@69-10.0.0.116:22-10.0.0.1:43086.service - OpenSSH per-connection server daemon (10.0.0.1:43086). Feb 13 19:57:30.209447 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:30.210632 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:30.214606 systemd-logind[1515]: New session 70 of user core. Feb 13 19:57:30.221411 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 19:57:30.326524 sshd[5975]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:30.329441 systemd-logind[1515]: Session 70 logged out. Waiting for processes to exit. Feb 13 19:57:30.329588 systemd[1]: sshd@69-10.0.0.116:22-10.0.0.1:43086.service: Deactivated successfully. Feb 13 19:57:30.331922 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 19:57:30.332910 systemd-logind[1515]: Removed session 70. Feb 13 19:57:35.343554 systemd[1]: Started sshd@70-10.0.0.116:22-10.0.0.1:40748.service - OpenSSH per-connection server daemon (10.0.0.1:40748). 
Feb 13 19:57:35.378012 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 40748 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:35.379222 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:35.382517 systemd-logind[1515]: New session 71 of user core. Feb 13 19:57:35.392549 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 19:57:35.494717 sshd[6011]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:35.497828 systemd[1]: sshd@70-10.0.0.116:22-10.0.0.1:40748.service: Deactivated successfully. Feb 13 19:57:35.500801 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 19:57:35.501534 systemd-logind[1515]: Session 71 logged out. Waiting for processes to exit. Feb 13 19:57:35.502296 systemd-logind[1515]: Removed session 71. Feb 13 19:57:40.505423 systemd[1]: Started sshd@71-10.0.0.116:22-10.0.0.1:40752.service - OpenSSH per-connection server daemon (10.0.0.1:40752). Feb 13 19:57:40.540290 sshd[6047]: Accepted publickey for core from 10.0.0.1 port 40752 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:40.541545 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:40.545113 systemd-logind[1515]: New session 72 of user core. Feb 13 19:57:40.558473 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 19:57:40.661595 sshd[6047]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:40.664855 systemd[1]: sshd@71-10.0.0.116:22-10.0.0.1:40752.service: Deactivated successfully. Feb 13 19:57:40.666723 systemd-logind[1515]: Session 72 logged out. Waiting for processes to exit. Feb 13 19:57:40.666795 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 19:57:40.667698 systemd-logind[1515]: Removed session 72. Feb 13 19:57:45.673562 systemd[1]: Started sshd@72-10.0.0.116:22-10.0.0.1:60278.service - OpenSSH per-connection server daemon (10.0.0.1:60278). Feb 13 19:57:45.708797 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 60278 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:45.710040 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:45.713620 systemd-logind[1515]: New session 73 of user core. Feb 13 19:57:45.722425 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 19:57:45.827087 sshd[6086]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:45.830720 systemd[1]: sshd@72-10.0.0.116:22-10.0.0.1:60278.service: Deactivated successfully. Feb 13 19:57:45.833341 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 19:57:45.833618 systemd-logind[1515]: Session 73 logged out. Waiting for processes to exit. Feb 13 19:57:45.835169 systemd-logind[1515]: Removed session 73. Feb 13 19:57:48.828811 kubelet[2669]: E0213 19:57:48.828771 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:50.838475 systemd[1]: Started sshd@73-10.0.0.116:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290). 
Feb 13 19:57:50.873035 sshd[6129]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:50.874225 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:50.877945 systemd-logind[1515]: New session 74 of user core. Feb 13 19:57:50.888409 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 19:57:50.991624 sshd[6129]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:50.994656 systemd[1]: sshd@73-10.0.0.116:22-10.0.0.1:60290.service: Deactivated successfully. Feb 13 19:57:50.996652 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 19:57:50.996674 systemd-logind[1515]: Session 74 logged out. Waiting for processes to exit. Feb 13 19:57:50.998163 systemd-logind[1515]: Removed session 74. Feb 13 19:57:56.002640 systemd[1]: Started sshd@74-10.0.0.116:22-10.0.0.1:45658.service - OpenSSH per-connection server daemon (10.0.0.1:45658). Feb 13 19:57:56.037668 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 45658 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:56.038835 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:56.042229 systemd-logind[1515]: New session 75 of user core. Feb 13 19:57:56.053453 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 19:57:56.159995 sshd[6167]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:56.163346 systemd[1]: sshd@74-10.0.0.116:22-10.0.0.1:45658.service: Deactivated successfully. Feb 13 19:57:56.165207 systemd-logind[1515]: Session 75 logged out. Waiting for processes to exit. Feb 13 19:57:56.165275 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 19:57:56.166794 systemd-logind[1515]: Removed session 75. Feb 13 19:57:57.828403 kubelet[2669]: E0213 19:57:57.828363 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:01.171410 systemd[1]: Started sshd@75-10.0.0.116:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664). Feb 13 19:58:01.206302 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:01.207435 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:01.210952 systemd-logind[1515]: New session 76 of user core. Feb 13 19:58:01.217520 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 19:58:01.320983 sshd[6206]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:01.324300 systemd[1]: sshd@75-10.0.0.116:22-10.0.0.1:45664.service: Deactivated successfully. Feb 13 19:58:01.326334 systemd-logind[1515]: Session 76 logged out. Waiting for processes to exit. Feb 13 19:58:01.326895 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 19:58:01.327651 systemd-logind[1515]: Removed session 76. Feb 13 19:58:06.327409 systemd[1]: Started sshd@76-10.0.0.116:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472). 
Feb 13 19:58:06.363858 sshd[6246]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:06.365014 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:06.369102 systemd-logind[1515]: New session 77 of user core. Feb 13 19:58:06.377418 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 19:58:06.482783 sshd[6246]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:06.485937 systemd[1]: sshd@76-10.0.0.116:22-10.0.0.1:58472.service: Deactivated successfully. Feb 13 19:58:06.489157 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 19:58:06.489755 systemd-logind[1515]: Session 77 logged out. Waiting for processes to exit. Feb 13 19:58:06.490700 systemd-logind[1515]: Removed session 77. Feb 13 19:58:11.499621 systemd[1]: Started sshd@77-10.0.0.116:22-10.0.0.1:58476.service - OpenSSH per-connection server daemon (10.0.0.1:58476). Feb 13 19:58:11.534100 sshd[6295]: Accepted publickey for core from 10.0.0.1 port 58476 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:11.535266 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:11.538918 systemd-logind[1515]: New session 78 of user core. Feb 13 19:58:11.549472 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 19:58:11.654801 sshd[6295]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:11.664426 systemd[1]: Started sshd@78-10.0.0.116:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482). Feb 13 19:58:11.664814 systemd[1]: sshd@77-10.0.0.116:22-10.0.0.1:58476.service: Deactivated successfully. Feb 13 19:58:11.667402 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 19:58:11.667967 systemd-logind[1515]: Session 78 logged out. Waiting for processes to exit. Feb 13 19:58:11.668933 systemd-logind[1515]: Removed session 78. Feb 13 19:58:11.699844 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:11.701267 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:11.707732 systemd-logind[1515]: New session 79 of user core. Feb 13 19:58:11.715453 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 19:58:11.938721 sshd[6308]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:11.953844 systemd[1]: Started sshd@79-10.0.0.116:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488). Feb 13 19:58:11.954242 systemd[1]: sshd@78-10.0.0.116:22-10.0.0.1:58482.service: Deactivated successfully. Feb 13 19:58:11.957514 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 19:58:11.957816 systemd-logind[1515]: Session 79 logged out. Waiting for processes to exit. Feb 13 19:58:11.959439 systemd-logind[1515]: Removed session 79. Feb 13 19:58:11.992226 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:11.993232 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:11.997244 systemd-logind[1515]: New session 80 of user core. Feb 13 19:58:12.007481 systemd[1]: Started session-80.scope - Session 80 of User core. 
Feb 13 19:58:13.091229 sshd[6322]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:13.102218 systemd[1]: Started sshd@80-10.0.0.116:22-10.0.0.1:45008.service - OpenSSH per-connection server daemon (10.0.0.1:45008). Feb 13 19:58:13.103454 systemd[1]: sshd@79-10.0.0.116:22-10.0.0.1:58488.service: Deactivated successfully. Feb 13 19:58:13.106878 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 19:58:13.110612 systemd-logind[1515]: Session 80 logged out. Waiting for processes to exit. Feb 13 19:58:13.111981 systemd-logind[1515]: Removed session 80. Feb 13 19:58:13.130544 update_engine[1518]: I20250213 19:58:13.130494 1518 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:58:13.131296 update_engine[1518]: I20250213 19:58:13.131259 1518 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:58:13.131523 update_engine[1518]: I20250213 19:58:13.131496 1518 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:58:13.131865 update_engine[1518]: I20250213 19:58:13.131836 1518 omaha_request_params.cc:62] Current group set to lts Feb 13 19:58:13.132571 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132601 1518 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132624 1518 update_attempter.cc:643] Scheduling an action processor start. Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132642 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132673 1518 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132740 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132751 1518 omaha_request_action.cc:272] Request: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: Feb 13 19:58:13.132787 update_engine[1518]: I20250213 19:58:13.132756 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:58:13.136515 update_engine[1518]: I20250213 19:58:13.136469 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:58:13.136840 update_engine[1518]: I20250213 19:58:13.136703 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:58:13.145771 sshd[6346]: Accepted publickey for core from 10.0.0.1 port 45008 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:13.147038 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:13.151042 systemd-logind[1515]: New session 81 of user core. 
Feb 13 19:58:13.158903 update_engine[1518]: E20250213 19:58:13.158855 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:58:13.158959 update_engine[1518]: I20250213 19:58:13.158931 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:58:13.160428 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 19:58:13.367702 sshd[6346]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:13.374437 systemd[1]: Started sshd@81-10.0.0.116:22-10.0.0.1:45020.service - OpenSSH per-connection server daemon (10.0.0.1:45020). Feb 13 19:58:13.374961 systemd[1]: sshd@80-10.0.0.116:22-10.0.0.1:45008.service: Deactivated successfully. Feb 13 19:58:13.378713 systemd-logind[1515]: Session 81 logged out. Waiting for processes to exit. Feb 13 19:58:13.378870 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 19:58:13.381660 systemd-logind[1515]: Removed session 81. Feb 13 19:58:13.412718 sshd[6358]: Accepted publickey for core from 10.0.0.1 port 45020 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:13.413965 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:13.417815 systemd-logind[1515]: New session 82 of user core. Feb 13 19:58:13.432495 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 19:58:13.536850 sshd[6358]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:13.540055 systemd[1]: sshd@81-10.0.0.116:22-10.0.0.1:45020.service: Deactivated successfully. Feb 13 19:58:13.542050 systemd-logind[1515]: Session 82 logged out. Waiting for processes to exit. Feb 13 19:58:13.542132 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 19:58:13.543868 systemd-logind[1515]: Removed session 82. Feb 13 19:58:15.829068 kubelet[2669]: E0213 19:58:15.828979 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:18.553494 systemd[1]: Started sshd@82-10.0.0.116:22-10.0.0.1:45028.service - OpenSSH per-connection server daemon (10.0.0.1:45028). Feb 13 19:58:18.589817 sshd[6398]: Accepted publickey for core from 10.0.0.1 port 45028 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:18.591042 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:18.594786 systemd-logind[1515]: New session 83 of user core. Feb 13 19:58:18.609417 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 19:58:18.712938 sshd[6398]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:18.716093 systemd[1]: sshd@82-10.0.0.116:22-10.0.0.1:45028.service: Deactivated successfully. Feb 13 19:58:18.718078 systemd-logind[1515]: Session 83 logged out. Waiting for processes to exit. Feb 13 19:58:18.718149 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 19:58:18.719215 systemd-logind[1515]: Removed session 83. Feb 13 19:58:23.134910 update_engine[1518]: I20250213 19:58:23.134829 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:58:23.135316 update_engine[1518]: I20250213 19:58:23.135114 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:58:23.135341 update_engine[1518]: I20250213 19:58:23.135313 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 19:58:23.178563 update_engine[1518]: E20250213 19:58:23.178499 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:58:23.178754 update_engine[1518]: I20250213 19:58:23.178581 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:58:23.727421 systemd[1]: Started sshd@83-10.0.0.116:22-10.0.0.1:39060.service - OpenSSH per-connection server daemon (10.0.0.1:39060). Feb 13 19:58:23.762121 sshd[6435]: Accepted publickey for core from 10.0.0.1 port 39060 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:23.763321 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:23.766776 systemd-logind[1515]: New session 84 of user core. Feb 13 19:58:23.778425 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 19:58:23.882216 sshd[6435]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:23.886054 systemd[1]: sshd@83-10.0.0.116:22-10.0.0.1:39060.service: Deactivated successfully. Feb 13 19:58:23.887938 systemd-logind[1515]: Session 84 logged out. Waiting for processes to exit. Feb 13 19:58:23.888016 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 19:58:23.888950 systemd-logind[1515]: Removed session 84. Feb 13 19:58:24.828689 kubelet[2669]: E0213 19:58:24.828637 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:28.893420 systemd[1]: Started sshd@84-10.0.0.116:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066). Feb 13 19:58:28.928742 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:28.930052 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:28.934288 systemd-logind[1515]: New session 85 of user core. Feb 13 19:58:28.948426 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 19:58:29.051556 sshd[6471]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:29.055585 systemd-logind[1515]: Session 85 logged out. Waiting for processes to exit. Feb 13 19:58:29.055714 systemd[1]: sshd@84-10.0.0.116:22-10.0.0.1:39066.service: Deactivated successfully. Feb 13 19:58:29.057675 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 19:58:29.058144 systemd-logind[1515]: Removed session 85. Feb 13 19:58:33.130840 update_engine[1518]: I20250213 19:58:33.130750 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:58:33.131384 update_engine[1518]: I20250213 19:58:33.131049 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:58:33.131384 update_engine[1518]: I20250213 19:58:33.131239 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:58:33.135069 update_engine[1518]: E20250213 19:58:33.135022 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:58:33.135132 update_engine[1518]: I20250213 19:58:33.135090 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:58:34.062474 systemd[1]: Started sshd@85-10.0.0.116:22-10.0.0.1:51736.service - OpenSSH per-connection server daemon (10.0.0.1:51736). 
Feb 13 19:58:34.097634 sshd[6510]: Accepted publickey for core from 10.0.0.1 port 51736 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:34.098099 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:34.102319 systemd-logind[1515]: New session 86 of user core. Feb 13 19:58:34.109453 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 19:58:34.211415 sshd[6510]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:34.213829 systemd[1]: sshd@85-10.0.0.116:22-10.0.0.1:51736.service: Deactivated successfully. Feb 13 19:58:34.216392 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 19:58:34.217097 systemd-logind[1515]: Session 86 logged out. Waiting for processes to exit. Feb 13 19:58:34.218080 systemd-logind[1515]: Removed session 86. Feb 13 19:58:34.828905 kubelet[2669]: E0213 19:58:34.828870 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:39.225418 systemd[1]: Started sshd@86-10.0.0.116:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750). Feb 13 19:58:39.260243 sshd[6547]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:39.261152 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:39.265091 systemd-logind[1515]: New session 87 of user core. Feb 13 19:58:39.278635 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 19:58:39.382568 sshd[6547]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:39.385708 systemd[1]: sshd@86-10.0.0.116:22-10.0.0.1:51750.service: Deactivated successfully. Feb 13 19:58:39.387736 systemd-logind[1515]: Session 87 logged out. Waiting for processes to exit. Feb 13 19:58:39.387822 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 19:58:39.389223 systemd-logind[1515]: Removed session 87. Feb 13 19:58:43.134768 update_engine[1518]: I20250213 19:58:43.134241 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:58:43.134768 update_engine[1518]: I20250213 19:58:43.134560 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:58:43.134768 update_engine[1518]: I20250213 19:58:43.134723 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:58:43.157240 update_engine[1518]: E20250213 19:58:43.157058 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:58:43.157240 update_engine[1518]: I20250213 19:58:43.157126 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:58:43.157240 update_engine[1518]: I20250213 19:58:43.157135 1518 omaha_request_action.cc:617] Omaha request response: Feb 13 19:58:43.157240 update_engine[1518]: E20250213 19:58:43.157232 1518 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157251 1518 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157256 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157261 1518 update_attempter.cc:306] Processing Done. Feb 13 19:58:43.157385 update_engine[1518]: E20250213 19:58:43.157275 1518 update_attempter.cc:619] Update failed. Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157280 1518 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157284 1518 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 19:58:43.157385 update_engine[1518]: I20250213 19:58:43.157289 1518 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 19:58:43.157697 update_engine[1518]: I20250213 19:58:43.157497 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:58:43.157697 update_engine[1518]: I20250213 19:58:43.157533 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:58:43.157697 update_engine[1518]: I20250213 19:58:43.157538 1518 omaha_request_action.cc:272] Request: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: Feb 13 19:58:43.157697 update_engine[1518]: I20250213 19:58:43.157544 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:58:43.157927 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 19:58:43.158160 update_engine[1518]: I20250213 19:58:43.157732 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:58:43.158160 update_engine[1518]: I20250213 19:58:43.157856 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:58:43.167482 update_engine[1518]: E20250213 19:58:43.167436 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167488 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167496 1518 omaha_request_action.cc:617] Omaha request response: Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167501 1518 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167506 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167510 1518 update_attempter.cc:306] Processing Done. Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167515 1518 update_attempter.cc:310] Error event sent. 
Feb 13 19:58:43.167547 update_engine[1518]: I20250213 19:58:43.167522 1518 update_check_scheduler.cc:74] Next update check in 49m52s Feb 13 19:58:43.167775 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 19:58:44.392537 systemd[1]: Started sshd@87-10.0.0.116:22-10.0.0.1:36834.service - OpenSSH per-connection server daemon (10.0.0.1:36834). Feb 13 19:58:44.427246 sshd[6586]: Accepted publickey for core from 10.0.0.1 port 36834 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:44.428410 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:44.432230 systemd-logind[1515]: New session 88 of user core. Feb 13 19:58:44.441435 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 19:58:44.543995 sshd[6586]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:44.546559 systemd[1]: sshd@87-10.0.0.116:22-10.0.0.1:36834.service: Deactivated successfully. Feb 13 19:58:44.549445 systemd-logind[1515]: Session 88 logged out. Waiting for processes to exit. Feb 13 19:58:44.550109 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 19:58:44.550987 systemd-logind[1515]: Removed session 88. Feb 13 19:58:44.828758 kubelet[2669]: E0213 19:58:44.828641 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:47.828769 kubelet[2669]: E0213 19:58:47.828660 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:49.557404 systemd[1]: Started sshd@88-10.0.0.116:22-10.0.0.1:36850.service - OpenSSH per-connection server daemon (10.0.0.1:36850). Feb 13 19:58:49.591900 sshd[6622]: Accepted publickey for core from 10.0.0.1 port 36850 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:49.593255 sshd[6622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:49.596716 systemd-logind[1515]: New session 89 of user core. Feb 13 19:58:49.605405 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 19:58:49.706648 sshd[6622]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:49.709823 systemd[1]: sshd@88-10.0.0.116:22-10.0.0.1:36850.service: Deactivated successfully. Feb 13 19:58:49.711737 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 19:58:49.712124 systemd-logind[1515]: Session 89 logged out. Waiting for processes to exit. Feb 13 19:58:49.713222 systemd-logind[1515]: Removed session 89. Feb 13 19:58:54.721483 systemd[1]: Started sshd@89-10.0.0.116:22-10.0.0.1:37506.service - OpenSSH per-connection server daemon (10.0.0.1:37506). Feb 13 19:58:54.756408 sshd[6658]: Accepted publickey for core from 10.0.0.1 port 37506 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:54.757568 sshd[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:54.761493 systemd-logind[1515]: New session 90 of user core. Feb 13 19:58:54.772427 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 19:58:54.873251 sshd[6658]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:54.876367 systemd[1]: sshd@89-10.0.0.116:22-10.0.0.1:37506.service: Deactivated successfully. 
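The update_engine sequence above shows an Omaha update check being posted to the literal host "disabled", which is the placeholder Flatcar uses when automatic update checks are switched off (typically SERVER=disabled in the update configuration). Because that host can never resolve, libcurl reports "Could not resolve host: disabled"; the fetcher retries the transfer a few times (retry 1 through 3 in the log), the attempt ends with error code 2000 (kActionCodeOmahaErrorInHTTPResponse), and the next check is scheduled roughly fifty minutes later. The sketch below is a minimal Go illustration of that retry-then-reschedule pattern, not the real C++ fetcher; the URL, constants, and function names are assumptions.

```go
// Minimal sketch of the retry pattern visible in the log: each transfer to the
// placeholder host "disabled" fails DNS resolution, is retried a few times,
// and the attempt then gives up until the next scheduled check. All names
// here are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

const (
	omahaURL   = "http://disabled/update" // placeholder endpoint, never resolvable
	maxRetries = 3
	retryDelay = time.Second
)

func postOmahaRequest(client *http.Client) error {
	resp, err := client.Post(omahaURL, "text/xml", strings.NewReader("<request/>"))
	if err != nil {
		return err // e.g. "no such host", matching "Could not resolve host: disabled"
	}
	resp.Body.Close()
	return nil
}

func main() {
	client := &http.Client{Timeout: time.Second}

	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err = postOmahaRequest(client); err == nil {
			fmt.Println("update check succeeded")
			return
		}
		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(retryDelay)
	}

	// Give up for now; the scheduler will try again on the next check.
	fmt.Printf("update attempt failed (%v); next update check later\n", err)
}
```

The net effect, as the locksmithd lines confirm, is that the machine cycles between UPDATE_STATUS_CHECKING_FOR_UPDATE, UPDATE_STATUS_REPORTING_ERROR_EVENT, and UPDATE_STATUS_IDLE without ever downloading a payload, which is the expected behaviour when the update server is deliberately set to "disabled".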
Feb 13 19:58:54.878259 systemd-logind[1515]: Session 90 logged out. Waiting for processes to exit. Feb 13 19:58:54.878340 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 19:58:54.879701 systemd-logind[1515]: Removed session 90. Feb 13 19:58:59.889426 systemd[1]: Started sshd@90-10.0.0.116:22-10.0.0.1:37514.service - OpenSSH per-connection server daemon (10.0.0.1:37514). Feb 13 19:58:59.924120 sshd[6696]: Accepted publickey for core from 10.0.0.1 port 37514 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:59.925348 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:59.929641 systemd-logind[1515]: New session 91 of user core. Feb 13 19:58:59.939427 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 19:59:00.042661 sshd[6696]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:00.046014 systemd[1]: sshd@90-10.0.0.116:22-10.0.0.1:37514.service: Deactivated successfully. Feb 13 19:59:00.047863 systemd-logind[1515]: Session 91 logged out. Waiting for processes to exit. Feb 13 19:59:00.047930 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 19:59:00.049125 systemd-logind[1515]: Removed session 91. Feb 13 19:59:00.829080 kubelet[2669]: E0213 19:59:00.828975 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:05.063708 systemd[1]: Started sshd@91-10.0.0.116:22-10.0.0.1:37264.service - OpenSSH per-connection server daemon (10.0.0.1:37264). Feb 13 19:59:05.098358 sshd[6732]: Accepted publickey for core from 10.0.0.1 port 37264 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:05.099679 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:05.103751 systemd-logind[1515]: New session 92 of user core. Feb 13 19:59:05.112488 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 19:59:05.215982 sshd[6732]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:05.219138 systemd[1]: sshd@91-10.0.0.116:22-10.0.0.1:37264.service: Deactivated successfully. Feb 13 19:59:05.221118 systemd-logind[1515]: Session 92 logged out. Waiting for processes to exit. Feb 13 19:59:05.221164 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 19:59:05.222395 systemd-logind[1515]: Removed session 92. Feb 13 19:59:10.230409 systemd[1]: Started sshd@92-10.0.0.116:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270). Feb 13 19:59:10.265038 sshd[6768]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:10.266202 sshd[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:10.269602 systemd-logind[1515]: New session 93 of user core. Feb 13 19:59:10.280426 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 19:59:10.387115 sshd[6768]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:10.390074 systemd-logind[1515]: Session 93 logged out. Waiting for processes to exit. Feb 13 19:59:10.390953 systemd[1]: sshd@92-10.0.0.116:22-10.0.0.1:37270.service: Deactivated successfully. Feb 13 19:59:10.393934 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 19:59:10.396169 systemd-logind[1515]: Removed session 93. 
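The recurring kubelet dns.go:153 errors (here and throughout the rest of the capture) mean the node's resolv.conf lists more nameservers than kubelet will propagate to pods: the limit is three, so the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and any further entries are dropped, with this error logged on every sync. Below is a minimal Python sketch of that truncation, assuming a standard resolv.conf path; the parsing code is illustrative and is not kubelet's implementation.

```python
# Illustrative re-creation of kubelet's nameserver cap: read resolv.conf,
# keep the first three nameserver entries, and report any that are dropped.
MAX_NAMESERVERS = 3  # matches the three servers kept in the log entries

def applied_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    servers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        dropped = servers[MAX_NAMESERVERS:]
        print(f"Nameserver limits exceeded, omitting: {' '.join(dropped)}")
    return servers[:MAX_NAMESERVERS]

# With a resolv.conf listing four or more servers, this keeps the first three,
# e.g. ['1.1.1.1', '1.0.0.1', '8.8.8.8'], matching the applied line in the log.
```

Trimming the node's resolv.conf (or the file handed to kubelet via --resolv-conf) to at most three nameservers makes the message stop.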
Feb 13 19:59:15.402477 systemd[1]: Started sshd@93-10.0.0.116:22-10.0.0.1:47748.service - OpenSSH per-connection server daemon (10.0.0.1:47748). Feb 13 19:59:15.436880 sshd[6805]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:15.438144 sshd[6805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:15.442371 systemd-logind[1515]: New session 94 of user core. Feb 13 19:59:15.465458 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 19:59:15.566857 sshd[6805]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:15.569781 systemd[1]: sshd@93-10.0.0.116:22-10.0.0.1:47748.service: Deactivated successfully. Feb 13 19:59:15.571784 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 19:59:15.571796 systemd-logind[1515]: Session 94 logged out. Waiting for processes to exit. Feb 13 19:59:15.573049 systemd-logind[1515]: Removed session 94. Feb 13 19:59:20.579507 systemd[1]: Started sshd@94-10.0.0.116:22-10.0.0.1:47752.service - OpenSSH per-connection server daemon (10.0.0.1:47752). Feb 13 19:59:20.613982 sshd[6843]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:20.615133 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:20.618934 systemd-logind[1515]: New session 95 of user core. Feb 13 19:59:20.629499 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 19:59:20.734817 sshd[6843]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:20.737364 systemd[1]: sshd@94-10.0.0.116:22-10.0.0.1:47752.service: Deactivated successfully. Feb 13 19:59:20.740021 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 19:59:20.740074 systemd-logind[1515]: Session 95 logged out. Waiting for processes to exit. Feb 13 19:59:20.741553 systemd-logind[1515]: Removed session 95. Feb 13 19:59:21.828369 kubelet[2669]: E0213 19:59:21.828270 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:25.747460 systemd[1]: Started sshd@95-10.0.0.116:22-10.0.0.1:42722.service - OpenSSH per-connection server daemon (10.0.0.1:42722). Feb 13 19:59:25.781823 sshd[6879]: Accepted publickey for core from 10.0.0.1 port 42722 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:25.783044 sshd[6879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:25.787109 systemd-logind[1515]: New session 96 of user core. Feb 13 19:59:25.800442 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 19:59:25.903950 sshd[6879]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:25.907048 systemd[1]: sshd@95-10.0.0.116:22-10.0.0.1:42722.service: Deactivated successfully. Feb 13 19:59:25.909140 systemd-logind[1515]: Session 96 logged out. Waiting for processes to exit. Feb 13 19:59:25.909252 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 19:59:25.910077 systemd-logind[1515]: Removed session 96. Feb 13 19:59:30.914428 systemd[1]: Started sshd@96-10.0.0.116:22-10.0.0.1:42734.service - OpenSSH per-connection server daemon (10.0.0.1:42734). 
Feb 13 19:59:30.952250 sshd[6925]: Accepted publickey for core from 10.0.0.1 port 42734 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:30.953521 sshd[6925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:30.957245 systemd-logind[1515]: New session 97 of user core. Feb 13 19:59:30.966430 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 19:59:31.070746 sshd[6925]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:31.073842 systemd[1]: sshd@96-10.0.0.116:22-10.0.0.1:42734.service: Deactivated successfully. Feb 13 19:59:31.076007 systemd-logind[1515]: Session 97 logged out. Waiting for processes to exit. Feb 13 19:59:31.076067 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 19:59:31.077468 systemd-logind[1515]: Removed session 97. Feb 13 19:59:36.085450 systemd[1]: Started sshd@97-10.0.0.116:22-10.0.0.1:44022.service - OpenSSH per-connection server daemon (10.0.0.1:44022). Feb 13 19:59:36.120422 sshd[6961]: Accepted publickey for core from 10.0.0.1 port 44022 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:36.121755 sshd[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:36.125331 systemd-logind[1515]: New session 98 of user core. Feb 13 19:59:36.136485 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 19:59:36.242011 sshd[6961]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:36.244852 systemd[1]: sshd@97-10.0.0.116:22-10.0.0.1:44022.service: Deactivated successfully. Feb 13 19:59:36.248042 systemd-logind[1515]: Session 98 logged out. Waiting for processes to exit. Feb 13 19:59:36.248615 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 19:59:36.250840 systemd-logind[1515]: Removed session 98. Feb 13 19:59:36.828586 kubelet[2669]: E0213 19:59:36.828548 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:40.828583 kubelet[2669]: E0213 19:59:40.828551 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:41.253411 systemd[1]: Started sshd@98-10.0.0.116:22-10.0.0.1:44032.service - OpenSSH per-connection server daemon (10.0.0.1:44032). Feb 13 19:59:41.288885 sshd[6997]: Accepted publickey for core from 10.0.0.1 port 44032 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:41.290037 sshd[6997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:41.294745 systemd-logind[1515]: New session 99 of user core. Feb 13 19:59:41.300442 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 19:59:41.404568 sshd[6997]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:41.408252 systemd[1]: sshd@98-10.0.0.116:22-10.0.0.1:44032.service: Deactivated successfully. Feb 13 19:59:41.412485 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 19:59:41.415042 systemd-logind[1515]: Session 99 logged out. Waiting for processes to exit. Feb 13 19:59:41.416050 systemd-logind[1515]: Removed session 99. Feb 13 19:59:46.415450 systemd[1]: Started sshd@99-10.0.0.116:22-10.0.0.1:56882.service - OpenSSH per-connection server daemon (10.0.0.1:56882). 
Feb 13 19:59:46.449972 sshd[7035]: Accepted publickey for core from 10.0.0.1 port 56882 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:46.451096 sshd[7035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:46.454949 systemd-logind[1515]: New session 100 of user core. Feb 13 19:59:46.461420 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 19:59:46.563659 sshd[7035]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:46.566847 systemd[1]: sshd@99-10.0.0.116:22-10.0.0.1:56882.service: Deactivated successfully. Feb 13 19:59:46.568806 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 19:59:46.569278 systemd-logind[1515]: Session 100 logged out. Waiting for processes to exit. Feb 13 19:59:46.570145 systemd-logind[1515]: Removed session 100. Feb 13 19:59:48.828642 kubelet[2669]: E0213 19:59:48.828590 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:51.575411 systemd[1]: Started sshd@100-10.0.0.116:22-10.0.0.1:56888.service - OpenSSH per-connection server daemon (10.0.0.1:56888). Feb 13 19:59:51.610067 sshd[7072]: Accepted publickey for core from 10.0.0.1 port 56888 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:51.611218 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:51.615007 systemd-logind[1515]: New session 101 of user core. Feb 13 19:59:51.620545 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 19:59:51.735255 sshd[7072]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:51.738684 systemd[1]: sshd@100-10.0.0.116:22-10.0.0.1:56888.service: Deactivated successfully. Feb 13 19:59:51.740660 systemd-logind[1515]: Session 101 logged out. Waiting for processes to exit. Feb 13 19:59:51.740718 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 19:59:51.741738 systemd-logind[1515]: Removed session 101. Feb 13 19:59:54.828949 kubelet[2669]: E0213 19:59:54.828610 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:56.747411 systemd[1]: Started sshd@101-10.0.0.116:22-10.0.0.1:54468.service - OpenSSH per-connection server daemon (10.0.0.1:54468). Feb 13 19:59:56.782263 sshd[7123]: Accepted publickey for core from 10.0.0.1 port 54468 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:56.783553 sshd[7123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:56.788008 systemd-logind[1515]: New session 102 of user core. Feb 13 19:59:56.797445 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 19:59:56.903542 sshd[7123]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:56.906041 systemd[1]: sshd@101-10.0.0.116:22-10.0.0.1:54468.service: Deactivated successfully. Feb 13 19:59:56.909229 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 19:59:56.909907 systemd-logind[1515]: Session 102 logged out. Waiting for processes to exit. Feb 13 19:59:56.910852 systemd-logind[1515]: Removed session 102. 
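From this point to the end of the capture the journal is a steady cadence of short SSH sessions from 10.0.0.1, roughly one every five minutes, each with the same lifecycle: per-connection sshd@N service started, publickey accepted for user core, session-N.scope created, then the session closes and everything is deactivated a few seconds later. When reviewing captures like this it can help to reduce them to per-session durations; the Python sketch below does that for journal text in exactly this format (the file name journal.txt and the regular expressions are assumptions written against the lines shown here, not a systemd interface).

```python
# Summarize SSH session lifetimes from journal text like the entries above:
# pair "New session N of user U" with "Removed session N." and print durations.
import re
from datetime import datetime

TEXT = open("journal.txt").read()   # hypothetical file holding entries like these

OPEN_RE = re.compile(r"(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)")
CLOSE_RE = re.compile(r"(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(stamp: str) -> datetime:
    # Journal timestamps carry no year; assume one (2025) just for the arithmetic.
    return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

opened = {m.group(2): (parse_ts(m.group(1)), m.group(3)) for m in OPEN_RE.finditer(TEXT)}
for m in CLOSE_RE.finditer(TEXT):
    sid = m.group(2)
    if sid in opened:
        start, user = opened[sid]
        print(f"session {sid} ({user}): open for {parse_ts(m.group(1)) - start}")
```

Run against this section it would print one line per session (for example, session 94 stays open for only a few seconds), which makes the regular five-minute connection pattern from 10.0.0.1 easy to see at a glance.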
Feb 13 19:59:58.828954 kubelet[2669]: E0213 19:59:58.828914 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:01.914402 systemd[1]: Started sshd@102-10.0.0.116:22-10.0.0.1:54472.service - OpenSSH per-connection server daemon (10.0.0.1:54472). Feb 13 20:00:01.950005 sshd[7163]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:01.951119 sshd[7163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:01.954540 systemd-logind[1515]: New session 103 of user core. Feb 13 20:00:01.966418 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:00:02.069522 sshd[7163]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:02.072100 systemd[1]: sshd@102-10.0.0.116:22-10.0.0.1:54472.service: Deactivated successfully. Feb 13 20:00:02.074635 systemd-logind[1515]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:00:02.075341 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:00:02.076282 systemd-logind[1515]: Removed session 103. Feb 13 20:00:07.081421 systemd[1]: Started sshd@103-10.0.0.116:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258). Feb 13 20:00:07.116509 sshd[7201]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:07.118265 sshd[7201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:07.122423 systemd-logind[1515]: New session 104 of user core. Feb 13 20:00:07.128409 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:00:07.229489 sshd[7201]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:07.232566 systemd[1]: sshd@103-10.0.0.116:22-10.0.0.1:44258.service: Deactivated successfully. Feb 13 20:00:07.235356 systemd-logind[1515]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:00:07.235449 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:00:07.236340 systemd-logind[1515]: Removed session 104. Feb 13 20:00:12.241445 systemd[1]: Started sshd@104-10.0.0.116:22-10.0.0.1:44266.service - OpenSSH per-connection server daemon (10.0.0.1:44266). Feb 13 20:00:12.276410 sshd[7238]: Accepted publickey for core from 10.0.0.1 port 44266 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:12.277641 sshd[7238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:12.281205 systemd-logind[1515]: New session 105 of user core. Feb 13 20:00:12.295405 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:00:12.400829 sshd[7238]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:12.403635 systemd[1]: sshd@104-10.0.0.116:22-10.0.0.1:44266.service: Deactivated successfully. Feb 13 20:00:12.405447 systemd-logind[1515]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:00:12.405520 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:00:12.406568 systemd-logind[1515]: Removed session 105. Feb 13 20:00:17.415400 systemd[1]: Started sshd@105-10.0.0.116:22-10.0.0.1:48912.service - OpenSSH per-connection server daemon (10.0.0.1:48912). 
Feb 13 20:00:17.451057 sshd[7275]: Accepted publickey for core from 10.0.0.1 port 48912 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:17.452323 sshd[7275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:17.456503 systemd-logind[1515]: New session 106 of user core. Feb 13 20:00:17.469416 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:00:17.572009 sshd[7275]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:17.575232 systemd[1]: sshd@105-10.0.0.116:22-10.0.0.1:48912.service: Deactivated successfully. Feb 13 20:00:17.577164 systemd-logind[1515]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:00:17.577247 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:00:17.578345 systemd-logind[1515]: Removed session 106. Feb 13 20:00:19.829005 kubelet[2669]: E0213 20:00:19.828968 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:22.592417 systemd[1]: Started sshd@106-10.0.0.116:22-10.0.0.1:38796.service - OpenSSH per-connection server daemon (10.0.0.1:38796). Feb 13 20:00:22.626952 sshd[7312]: Accepted publickey for core from 10.0.0.1 port 38796 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:22.628078 sshd[7312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:22.631453 systemd-logind[1515]: New session 107 of user core. Feb 13 20:00:22.641422 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:00:22.745371 sshd[7312]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:22.748414 systemd[1]: sshd@106-10.0.0.116:22-10.0.0.1:38796.service: Deactivated successfully. Feb 13 20:00:22.750258 systemd-logind[1515]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:00:22.750330 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:00:22.751266 systemd-logind[1515]: Removed session 107. Feb 13 20:00:26.829036 kubelet[2669]: E0213 20:00:26.828939 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:27.753426 systemd[1]: Started sshd@107-10.0.0.116:22-10.0.0.1:38806.service - OpenSSH per-connection server daemon (10.0.0.1:38806). Feb 13 20:00:27.788155 sshd[7348]: Accepted publickey for core from 10.0.0.1 port 38806 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:27.789325 sshd[7348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:27.792822 systemd-logind[1515]: New session 108 of user core. Feb 13 20:00:27.807558 systemd[1]: Started session-108.scope - Session 108 of User core. Feb 13 20:00:27.910000 sshd[7348]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:27.912463 systemd[1]: sshd@107-10.0.0.116:22-10.0.0.1:38806.service: Deactivated successfully. Feb 13 20:00:27.915030 systemd[1]: session-108.scope: Deactivated successfully. Feb 13 20:00:27.915234 systemd-logind[1515]: Session 108 logged out. Waiting for processes to exit. Feb 13 20:00:27.917541 systemd-logind[1515]: Removed session 108. Feb 13 20:00:32.924416 systemd[1]: Started sshd@108-10.0.0.116:22-10.0.0.1:35736.service - OpenSSH per-connection server daemon (10.0.0.1:35736). 
Feb 13 20:00:32.958628 sshd[7387]: Accepted publickey for core from 10.0.0.1 port 35736 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:32.959794 sshd[7387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:32.964154 systemd-logind[1515]: New session 109 of user core. Feb 13 20:00:32.974449 systemd[1]: Started session-109.scope - Session 109 of User core. Feb 13 20:00:33.076434 sshd[7387]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:33.079477 systemd[1]: sshd@108-10.0.0.116:22-10.0.0.1:35736.service: Deactivated successfully. Feb 13 20:00:33.082620 systemd[1]: session-109.scope: Deactivated successfully. Feb 13 20:00:33.083260 systemd-logind[1515]: Session 109 logged out. Waiting for processes to exit. Feb 13 20:00:33.084018 systemd-logind[1515]: Removed session 109. Feb 13 20:00:38.094412 systemd[1]: Started sshd@109-10.0.0.116:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742). Feb 13 20:00:38.129979 sshd[7423]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:38.131111 sshd[7423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:38.134563 systemd-logind[1515]: New session 110 of user core. Feb 13 20:00:38.147420 systemd[1]: Started session-110.scope - Session 110 of User core. Feb 13 20:00:38.251297 sshd[7423]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:38.254972 systemd[1]: sshd@109-10.0.0.116:22-10.0.0.1:35742.service: Deactivated successfully. Feb 13 20:00:38.257269 systemd-logind[1515]: Session 110 logged out. Waiting for processes to exit. Feb 13 20:00:38.257397 systemd[1]: session-110.scope: Deactivated successfully. Feb 13 20:00:38.258268 systemd-logind[1515]: Removed session 110. Feb 13 20:00:43.263492 systemd[1]: Started sshd@110-10.0.0.116:22-10.0.0.1:52182.service - OpenSSH per-connection server daemon (10.0.0.1:52182). Feb 13 20:00:43.297914 sshd[7460]: Accepted publickey for core from 10.0.0.1 port 52182 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:43.299664 sshd[7460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:43.303323 systemd-logind[1515]: New session 111 of user core. Feb 13 20:00:43.315510 systemd[1]: Started session-111.scope - Session 111 of User core. Feb 13 20:00:43.420085 sshd[7460]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:43.426671 systemd[1]: sshd@110-10.0.0.116:22-10.0.0.1:52182.service: Deactivated successfully. Feb 13 20:00:43.428589 systemd-logind[1515]: Session 111 logged out. Waiting for processes to exit. Feb 13 20:00:43.428684 systemd[1]: session-111.scope: Deactivated successfully. Feb 13 20:00:43.429768 systemd-logind[1515]: Removed session 111. Feb 13 20:00:44.828763 kubelet[2669]: E0213 20:00:44.828719 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:48.435410 systemd[1]: Started sshd@111-10.0.0.116:22-10.0.0.1:52186.service - OpenSSH per-connection server daemon (10.0.0.1:52186). 
Feb 13 20:00:48.470870 sshd[7498]: Accepted publickey for core from 10.0.0.1 port 52186 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:48.472032 sshd[7498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:48.476106 systemd-logind[1515]: New session 112 of user core. Feb 13 20:00:48.487430 systemd[1]: Started session-112.scope - Session 112 of User core. Feb 13 20:00:48.591368 sshd[7498]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:48.593747 systemd[1]: sshd@111-10.0.0.116:22-10.0.0.1:52186.service: Deactivated successfully. Feb 13 20:00:48.596299 systemd-logind[1515]: Session 112 logged out. Waiting for processes to exit. Feb 13 20:00:48.596956 systemd[1]: session-112.scope: Deactivated successfully. Feb 13 20:00:48.597959 systemd-logind[1515]: Removed session 112. Feb 13 20:00:49.828664 kubelet[2669]: E0213 20:00:49.828329 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:53.600428 systemd[1]: Started sshd@112-10.0.0.116:22-10.0.0.1:59910.service - OpenSSH per-connection server daemon (10.0.0.1:59910). Feb 13 20:00:53.634942 sshd[7535]: Accepted publickey for core from 10.0.0.1 port 59910 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:53.636299 sshd[7535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:53.640083 systemd-logind[1515]: New session 113 of user core. Feb 13 20:00:53.648485 systemd[1]: Started session-113.scope - Session 113 of User core. Feb 13 20:00:53.753728 sshd[7535]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:53.757828 systemd[1]: sshd@112-10.0.0.116:22-10.0.0.1:59910.service: Deactivated successfully. Feb 13 20:00:53.759982 systemd[1]: session-113.scope: Deactivated successfully. Feb 13 20:00:53.760002 systemd-logind[1515]: Session 113 logged out. Waiting for processes to exit. Feb 13 20:00:53.761467 systemd-logind[1515]: Removed session 113. Feb 13 20:00:58.773418 systemd[1]: Started sshd@113-10.0.0.116:22-10.0.0.1:59926.service - OpenSSH per-connection server daemon (10.0.0.1:59926). Feb 13 20:00:58.807855 sshd[7572]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:58.809385 sshd[7572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:58.813350 systemd-logind[1515]: New session 114 of user core. Feb 13 20:00:58.821426 systemd[1]: Started session-114.scope - Session 114 of User core. Feb 13 20:00:58.923392 sshd[7572]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:58.926001 systemd[1]: sshd@113-10.0.0.116:22-10.0.0.1:59926.service: Deactivated successfully. Feb 13 20:00:58.928559 systemd[1]: session-114.scope: Deactivated successfully. Feb 13 20:00:58.928810 systemd-logind[1515]: Session 114 logged out. Waiting for processes to exit. Feb 13 20:00:58.930402 systemd-logind[1515]: Removed session 114. 
Feb 13 20:00:59.828795 kubelet[2669]: E0213 20:00:59.828754 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:01.828388 kubelet[2669]: E0213 20:01:01.828358 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:03.932436 systemd[1]: Started sshd@114-10.0.0.116:22-10.0.0.1:46240.service - OpenSSH per-connection server daemon (10.0.0.1:46240). Feb 13 20:01:03.966707 sshd[7611]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:03.967923 sshd[7611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:03.971659 systemd-logind[1515]: New session 115 of user core. Feb 13 20:01:03.979483 systemd[1]: Started session-115.scope - Session 115 of User core. Feb 13 20:01:04.081254 sshd[7611]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:04.084349 systemd[1]: sshd@114-10.0.0.116:22-10.0.0.1:46240.service: Deactivated successfully. Feb 13 20:01:04.086300 systemd-logind[1515]: Session 115 logged out. Waiting for processes to exit. Feb 13 20:01:04.086389 systemd[1]: session-115.scope: Deactivated successfully. Feb 13 20:01:04.087236 systemd-logind[1515]: Removed session 115. Feb 13 20:01:09.106916 systemd[1]: Started sshd@115-10.0.0.116:22-10.0.0.1:46256.service - OpenSSH per-connection server daemon (10.0.0.1:46256). Feb 13 20:01:09.143376 sshd[7649]: Accepted publickey for core from 10.0.0.1 port 46256 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:09.144704 sshd[7649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:09.148149 systemd-logind[1515]: New session 116 of user core. Feb 13 20:01:09.154415 systemd[1]: Started session-116.scope - Session 116 of User core. Feb 13 20:01:09.259408 sshd[7649]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:09.263834 systemd[1]: sshd@115-10.0.0.116:22-10.0.0.1:46256.service: Deactivated successfully. Feb 13 20:01:09.265773 systemd[1]: session-116.scope: Deactivated successfully. Feb 13 20:01:09.265778 systemd-logind[1515]: Session 116 logged out. Waiting for processes to exit. Feb 13 20:01:09.267130 systemd-logind[1515]: Removed session 116. Feb 13 20:01:12.829488 kubelet[2669]: E0213 20:01:12.829389 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:14.280433 systemd[1]: Started sshd@116-10.0.0.116:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906). Feb 13 20:01:14.315072 sshd[7685]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:14.316374 sshd[7685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:14.319983 systemd-logind[1515]: New session 117 of user core. Feb 13 20:01:14.330420 systemd[1]: Started session-117.scope - Session 117 of User core. Feb 13 20:01:14.435889 sshd[7685]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:14.439087 systemd[1]: sshd@116-10.0.0.116:22-10.0.0.1:34906.service: Deactivated successfully. 
Feb 13 20:01:14.441550 systemd-logind[1515]: Session 117 logged out. Waiting for processes to exit. Feb 13 20:01:14.441557 systemd[1]: session-117.scope: Deactivated successfully. Feb 13 20:01:14.442959 systemd-logind[1515]: Removed session 117.