Feb 13 19:34:05.925082 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:34:05.925104 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 19:34:05.925114 kernel: KASLR enabled Feb 13 19:34:05.925120 kernel: efi: EFI v2.7 by EDK II Feb 13 19:34:05.925126 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Feb 13 19:34:05.925131 kernel: random: crng init done Feb 13 19:34:05.925138 kernel: ACPI: Early table checksum verification disabled Feb 13 19:34:05.925145 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Feb 13 19:34:05.925151 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:34:05.925158 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925165 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925171 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925177 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925183 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925190 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925198 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925205 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925211 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:34:05.925217 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:34:05.925223 kernel: NUMA: Failed to initialise from firmware Feb 13 19:34:05.925230 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:34:05.925236 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Feb 13 19:34:05.925243 kernel: Zone ranges: Feb 13 19:34:05.925249 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:34:05.925255 kernel: DMA32 empty Feb 13 19:34:05.925263 kernel: Normal empty Feb 13 19:34:05.925269 kernel: Movable zone start for each node Feb 13 19:34:05.925275 kernel: Early memory node ranges Feb 13 19:34:05.925282 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 19:34:05.925288 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:34:05.925294 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:34:05.925301 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:34:05.925307 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:34:05.925313 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:34:05.925320 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:34:05.925326 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:34:05.925332 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:34:05.925339 kernel: psci: probing for conduit method from ACPI. Feb 13 19:34:05.925346 kernel: psci: PSCIv1.1 detected in firmware. 
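Editor's note: the bracketed value on the very first line, 0x413fd0c1, is the CPU's MIDR_EL1 identification register; since this boot runs under KVM, the guest generally sees the host CPU's value. As a small illustrative sketch (the field layout is architectural; the two lookup entries cover only the values seen in this log), not something taken from the log itself:

    # Decode the MIDR_EL1 value the kernel prints at boot, e.g. [0x413fd0c1].
    # Field layout: implementer[31:24], variant[23:20], part number[15:4], revision[3:0].
    MIDR = 0x413FD0C1

    implementer = (MIDR >> 24) & 0xFF
    variant     = (MIDR >> 20) & 0xF
    part_num    = (MIDR >> 4) & 0xFFF
    revision    = MIDR & 0xF

    # Lookup covering just the values present in this transcript.
    IMPLEMENTERS = {0x41: "Arm Ltd."}
    PARTS = {0xD0C: "Neoverse N1"}

    print(f"implementer={IMPLEMENTERS.get(implementer, hex(implementer))}, "
          f"part={PARTS.get(part_num, hex(part_num))}, "
          f"variant=r{variant}p{revision}")
    # -> implementer=Arm Ltd., part=Neoverse N1, variant=r3p1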
Feb 13 19:34:05.925352 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:34:05.925361 kernel: psci: Trusted OS migration not required Feb 13 19:34:05.925368 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:34:05.925375 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:34:05.925383 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:34:05.925390 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:34:05.925397 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:34:05.925404 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:34:05.925411 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:34:05.925419 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:34:05.925426 kernel: CPU features: detected: Spectre-v4 Feb 13 19:34:05.925432 kernel: CPU features: detected: Spectre-BHB Feb 13 19:34:05.925439 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:34:05.925454 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:34:05.925463 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:34:05.925470 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:34:05.925477 kernel: alternatives: applying boot alternatives Feb 13 19:34:05.925485 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:34:05.925492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:34:05.925499 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:34:05.925505 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:34:05.925512 kernel: Fallback order for Node 0: 0 Feb 13 19:34:05.925519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:34:05.925526 kernel: Policy zone: DMA Feb 13 19:34:05.925532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:34:05.925540 kernel: software IO TLB: area num 4. Feb 13 19:34:05.925547 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:34:05.925555 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Feb 13 19:34:05.925561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:34:05.925568 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:34:05.925576 kernel: rcu: RCU event tracing is enabled. Feb 13 19:34:05.925583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:34:05.925590 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:34:05.925597 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:34:05.925604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
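Editor's note: the "Kernel command line:" entry above drives both the initrd (dracut echoes the same line later) and the /usr mount (mount.usr, verity.usr, verity.usrhash). A minimal sketch of splitting such a line into flags and key=value pairs for inspection; the parse_cmdline helper is illustrative, not a tool that appears in this log:

    import shlex

    # Command line exactly as printed by the kernel above; on a live system
    # the same text is available from /proc/cmdline.
    CMDLINE = ('BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr '
               'verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw '
               'mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 '
               'flatcar.first_boot=detected acpi=force '
               'verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7')

    def parse_cmdline(line: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to True."""
        params = {}
        for token in shlex.split(line):      # shlex also copes with quoted values
            key, sep, value = token.partition('=')
            params[key] = value if sep else True
        return params

    params = parse_cmdline(CMDLINE)
    print(params['root'])            # LABEL=ROOT
    print(params['verity.usrhash'])  # c15c751c0...29c201a7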
Feb 13 19:34:05.925611 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:34:05.925617 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:34:05.925626 kernel: GICv3: 256 SPIs implemented Feb 13 19:34:05.925633 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:34:05.925640 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:34:05.925647 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:34:05.925654 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:34:05.925660 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:34:05.925667 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:34:05.925674 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:34:05.925681 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:34:05.925688 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:34:05.925695 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:34:05.925703 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:34:05.925710 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:34:05.925717 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:34:05.925724 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:34:05.925731 kernel: arm-pv: using stolen time PV Feb 13 19:34:05.925738 kernel: Console: colour dummy device 80x25 Feb 13 19:34:05.925745 kernel: ACPI: Core revision 20230628 Feb 13 19:34:05.925752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:34:05.925759 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:34:05.925766 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:34:05.925774 kernel: landlock: Up and running. Feb 13 19:34:05.925781 kernel: SELinux: Initializing. Feb 13 19:34:05.925788 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:34:05.925795 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:34:05.925802 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:34:05.925809 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:34:05.925816 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:34:05.925823 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:34:05.925830 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:34:05.925838 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:34:05.925845 kernel: Remapping and enabling EFI services. Feb 13 19:34:05.925852 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:34:05.925859 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:34:05.925866 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:34:05.925873 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:34:05.925879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:34:05.925886 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:34:05.925893 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:34:05.925900 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:34:05.925909 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:34:05.925916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:34:05.925928 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:34:05.925936 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:34:05.925944 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:34:05.925951 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:34:05.925958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:34:05.925966 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:34:05.925973 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:34:05.925982 kernel: SMP: Total of 4 processors activated. Feb 13 19:34:05.926002 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:34:05.926010 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:34:05.926017 kernel: CPU features: detected: Common not Private translations Feb 13 19:34:05.926025 kernel: CPU features: detected: CRC32 instructions Feb 13 19:34:05.926032 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:34:05.926039 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:34:05.926047 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:34:05.926056 kernel: CPU features: detected: Privileged Access Never Feb 13 19:34:05.926063 kernel: CPU features: detected: RAS Extension Support Feb 13 19:34:05.926071 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:34:05.926078 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:34:05.926085 kernel: alternatives: applying system-wide alternatives Feb 13 19:34:05.926092 kernel: devtmpfs: initialized Feb 13 19:34:05.926099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:34:05.926107 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:34:05.926115 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:34:05.926124 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:34:05.926131 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Feb 13 19:34:05.926138 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:34:05.926146 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:34:05.926153 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:34:05.926160 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:34:05.926168 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:34:05.926175 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Feb 13 19:34:05.926183 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:34:05.926192 kernel: cpuidle: using governor menu Feb 13 19:34:05.926199 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:34:05.926206 kernel: ASID allocator initialised with 32768 entries Feb 13 19:34:05.926214 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:34:05.926221 kernel: Serial: AMBA PL011 UART driver Feb 13 19:34:05.926228 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:34:05.926236 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:34:05.926243 kernel: Modules: 509040 pages in range for PLT usage Feb 13 19:34:05.926250 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:34:05.926259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:34:05.926266 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:34:05.926274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:34:05.926281 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:34:05.926288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:34:05.926295 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:34:05.926303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:34:05.926310 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:34:05.926317 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:34:05.926326 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:34:05.926333 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:34:05.926340 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:34:05.926348 kernel: ACPI: Interpreter enabled Feb 13 19:34:05.926355 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:34:05.926362 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:34:05.926369 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:34:05.926376 kernel: printk: console [ttyAMA0] enabled Feb 13 19:34:05.926384 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:34:05.926532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:34:05.926610 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:34:05.926676 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:34:05.926743 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:34:05.926809 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:34:05.926819 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 
19:34:05.926826 kernel: PCI host bridge to bus 0000:00 Feb 13 19:34:05.926902 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:34:05.926963 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:34:05.927109 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:34:05.927181 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:34:05.927286 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:34:05.927374 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:34:05.927459 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:34:05.927530 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:34:05.927596 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:34:05.927665 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:34:05.927731 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:34:05.927798 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:34:05.927859 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:34:05.927919 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:34:05.927982 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:34:05.928003 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:34:05.928011 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:34:05.928019 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:34:05.928026 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:34:05.928033 kernel: iommu: Default domain type: Translated Feb 13 19:34:05.928040 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:34:05.928048 kernel: efivars: Registered efivars operations Feb 13 19:34:05.928057 kernel: vgaarb: loaded Feb 13 19:34:05.928065 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:34:05.928072 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:34:05.928079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:34:05.928087 kernel: pnp: PnP ACPI init Feb 13 19:34:05.928168 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:34:05.928179 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:34:05.928186 kernel: NET: Registered PF_INET protocol family Feb 13 19:34:05.928196 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:34:05.928204 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:34:05.928212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:34:05.928220 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:34:05.928227 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:34:05.928235 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:34:05.928242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:34:05.928250 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:34:05.928257 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:34:05.928266 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:34:05.928273 kernel: kvm [1]: HYP mode 
not available Feb 13 19:34:05.928281 kernel: Initialise system trusted keyrings Feb 13 19:34:05.928288 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:34:05.928295 kernel: Key type asymmetric registered Feb 13 19:34:05.928302 kernel: Asymmetric key parser 'x509' registered Feb 13 19:34:05.928309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:34:05.928317 kernel: io scheduler mq-deadline registered Feb 13 19:34:05.928324 kernel: io scheduler kyber registered Feb 13 19:34:05.928333 kernel: io scheduler bfq registered Feb 13 19:34:05.928340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:34:05.928347 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:34:05.928355 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:34:05.928424 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:34:05.928434 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:34:05.928441 kernel: thunder_xcv, ver 1.0 Feb 13 19:34:05.928454 kernel: thunder_bgx, ver 1.0 Feb 13 19:34:05.928462 kernel: nicpf, ver 1.0 Feb 13 19:34:05.928471 kernel: nicvf, ver 1.0 Feb 13 19:34:05.928551 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:34:05.928616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:34:05 UTC (1739475245) Feb 13 19:34:05.928626 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:34:05.928634 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:34:05.928642 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:34:05.928653 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:34:05.928665 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:34:05.928675 kernel: Segment Routing with IPv6 Feb 13 19:34:05.928682 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:34:05.928690 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:34:05.928697 kernel: Key type dns_resolver registered Feb 13 19:34:05.928704 kernel: registered taskstats version 1 Feb 13 19:34:05.928711 kernel: Loading compiled-in X.509 certificates Feb 13 19:34:05.928719 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 19:34:05.928726 kernel: Key type .fscrypt registered Feb 13 19:34:05.928733 kernel: Key type fscrypt-provisioning registered Feb 13 19:34:05.928742 kernel: ima: No TPM chip found, activating TPM-bypass! 
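Editor's note: the rtc-efi entry above sets the system clock from UEFI time and prints the same instant twice, as 2025-02-13T19:34:05 UTC and as epoch 1739475245. A one-line consistency check on those two numbers from the log:

    from datetime import datetime, timezone

    # Epoch value printed by rtc-efi when it programmed the system clock.
    print(datetime.fromtimestamp(1739475245, tz=timezone.utc).isoformat())
    # -> 2025-02-13T19:34:05+00:00, matching the journal timestamps in this transcript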
Feb 13 19:34:05.928749 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:34:05.928756 kernel: ima: No architecture policies found Feb 13 19:34:05.928764 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:34:05.928771 kernel: clk: Disabling unused clocks Feb 13 19:34:05.928778 kernel: Freeing unused kernel memory: 39360K Feb 13 19:34:05.928785 kernel: Run /init as init process Feb 13 19:34:05.928792 kernel: with arguments: Feb 13 19:34:05.928799 kernel: /init Feb 13 19:34:05.928809 kernel: with environment: Feb 13 19:34:05.928816 kernel: HOME=/ Feb 13 19:34:05.928824 kernel: TERM=linux Feb 13 19:34:05.928831 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:34:05.928841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:34:05.928850 systemd[1]: Detected virtualization kvm. Feb 13 19:34:05.928859 systemd[1]: Detected architecture arm64. Feb 13 19:34:05.928866 systemd[1]: Running in initrd. Feb 13 19:34:05.928876 systemd[1]: No hostname configured, using default hostname. Feb 13 19:34:05.928884 systemd[1]: Hostname set to . Feb 13 19:34:05.928892 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:34:05.928900 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:34:05.928908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:34:05.928917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:34:05.928925 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:34:05.928934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:34:05.928943 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:34:05.928952 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:34:05.928961 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:34:05.928969 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:34:05.928978 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:34:05.929016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:34:05.929028 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:34:05.929036 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:34:05.929045 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:34:05.929053 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:34:05.929061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:34:05.929069 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:34:05.929077 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:34:05.929086 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:34:05.929094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
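Editor's note: the device units systemd reports as "Expecting" use systemd's unit-name escaping, which is why /dev/disk/by-label/EFI-SYSTEM appears as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device: the leading slash is dropped, remaining '/' separators become '-', and other bytes outside a small safe set are written as \xNN. A simplified sketch of that mapping (it skips corner cases such as a leading dot or non-ASCII bytes that the real systemd-escape handles):

    import string

    def escape_device_path(path: str) -> str:
        """Roughly mimic systemd's path escaping for .device unit names."""
        safe = set(string.ascii_letters + string.digits + '_.')
        parts = []
        for component in path.strip('/').split('/'):
            parts.append(''.join(c if c in safe else f'\\x{ord(c):02x}'
                                 for c in component))
        return '-'.join(parts) + '.device'

    print(escape_device_path('/dev/disk/by-label/EFI-SYSTEM'))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the journal above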
Feb 13 19:34:05.929104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:34:05.929112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:34:05.929121 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:34:05.929129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:34:05.929137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:34:05.929146 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:34:05.929154 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:34:05.929162 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:34:05.929170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:34:05.929180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:34:05.929188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:34:05.929197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:34:05.929205 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:34:05.929213 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:34:05.929243 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 19:34:05.929262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:34:05.929271 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:34:05.929282 systemd-journald[237]: Journal started Feb 13 19:34:05.929302 systemd-journald[237]: Runtime Journal (/run/log/journal/7462baef99e043b3a188c93250035a40) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:34:05.911802 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 19:34:05.933143 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:34:05.933163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:34:05.935190 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 19:34:05.936172 kernel: Bridge firewalling registered Feb 13 19:34:05.936048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:34:05.945218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:34:05.946960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:34:05.949112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:34:05.952717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:34:05.959867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:34:05.962105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:34:05.965115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:34:05.966567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:34:05.974136 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:34:05.976525 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:34:05.986118 dracut-cmdline[275]: dracut-dracut-053 Feb 13 19:34:05.988682 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:34:06.004633 systemd-resolved[278]: Positive Trust Anchors: Feb 13 19:34:06.004650 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:34:06.004681 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:34:06.012122 systemd-resolved[278]: Defaulting to hostname 'linux'. Feb 13 19:34:06.013128 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:34:06.014266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:34:06.062016 kernel: SCSI subsystem initialized Feb 13 19:34:06.066014 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:34:06.074023 kernel: iscsi: registered transport (tcp) Feb 13 19:34:06.087010 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:34:06.087025 kernel: QLogic iSCSI HBA Driver Feb 13 19:34:06.129909 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:34:06.138151 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:34:06.153652 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:34:06.153712 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:34:06.155039 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:34:06.202015 kernel: raid6: neonx8 gen() 15783 MB/s Feb 13 19:34:06.219016 kernel: raid6: neonx4 gen() 15624 MB/s Feb 13 19:34:06.236013 kernel: raid6: neonx2 gen() 13215 MB/s Feb 13 19:34:06.253011 kernel: raid6: neonx1 gen() 10472 MB/s Feb 13 19:34:06.270012 kernel: raid6: int64x8 gen() 6941 MB/s Feb 13 19:34:06.287012 kernel: raid6: int64x4 gen() 7335 MB/s Feb 13 19:34:06.304012 kernel: raid6: int64x2 gen() 6115 MB/s Feb 13 19:34:06.321123 kernel: raid6: int64x1 gen() 5041 MB/s Feb 13 19:34:06.321144 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s Feb 13 19:34:06.339150 kernel: raid6: .... xor() 11931 MB/s, rmw enabled Feb 13 19:34:06.339164 kernel: raid6: using neon recovery algorithm Feb 13 19:34:06.344015 kernel: xor: measuring software checksum speed Feb 13 19:34:06.345235 kernel: 8regs : 17318 MB/sec Feb 13 19:34:06.345247 kernel: 32regs : 19631 MB/sec Feb 13 19:34:06.346563 kernel: arm64_neon : 26936 MB/sec Feb 13 19:34:06.346575 kernel: xor: using function: arm64_neon (26936 MB/sec) Feb 13 19:34:06.397381 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:34:06.409916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:34:06.422213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:34:06.435283 systemd-udevd[463]: Using default interface naming scheme 'v255'. Feb 13 19:34:06.438570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:34:06.448187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:34:06.461436 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Feb 13 19:34:06.489766 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:34:06.500176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:34:06.543026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:34:06.549178 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:34:06.565422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:34:06.568040 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:34:06.569669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:34:06.572097 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:34:06.583155 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:34:06.590997 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:34:06.605865 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:34:06.605982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:34:06.606041 kernel: GPT:9289727 != 19775487 Feb 13 19:34:06.606053 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:34:06.606064 kernel: GPT:9289727 != 19775487 Feb 13 19:34:06.606073 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:34:06.606082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:34:06.591660 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:34:06.604195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:34:06.604315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:34:06.606002 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:34:06.607339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:34:06.607502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:34:06.610691 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:34:06.623275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:34:06.634018 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Feb 13 19:34:06.634067 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (523) Feb 13 19:34:06.637002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:34:06.638573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:34:06.646277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
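Editor's note: the GPT warnings above ("GPT:9289727 != 19775487", "Alternate GPT header not at the end of the disk") are the usual sign of a disk image that was enlarged after partitioning: the backup GPT header still sits at the last LBA of the original, smaller image. The arithmetic follows directly from the numbers in the log (the disk-uuid.service messages a little further down show the headers being rewritten):

    SECTOR = 512  # vda uses 512-byte logical blocks per the log

    total_sectors = 19_775_488       # "19775488 512-byte logical blocks"
    backup_lba    = 9_289_727        # where the backup GPT header currently sits

    print(total_sectors * SECTOR / 1e9)        # ~10.13 GB  ("10.1 GB" in the log)
    print(total_sectors * SECTOR / 2**30)      # ~9.43 GiB  ("9.43 GiB" in the log)
    print((backup_lba + 1) * SECTOR / 2**30)   # ~4.43 GiB: the image size when its
                                               # GPT was originally written
    print(total_sectors - 1)                   # 19775487: LBA where the kernel
                                               # expects the backup header to be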
Feb 13 19:34:06.652879 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:34:06.654143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:34:06.659791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:34:06.676134 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:34:06.677961 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:34:06.685034 disk-uuid[552]: Primary Header is updated. Feb 13 19:34:06.685034 disk-uuid[552]: Secondary Entries is updated. Feb 13 19:34:06.685034 disk-uuid[552]: Secondary Header is updated. Feb 13 19:34:06.689014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:34:06.700775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:34:07.714022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:34:07.714078 disk-uuid[555]: The operation has completed successfully. Feb 13 19:34:07.740636 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:34:07.741687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:34:07.762172 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:34:07.765304 sh[574]: Success Feb 13 19:34:07.781028 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:34:07.815381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:34:07.817243 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:34:07.818221 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:34:07.829054 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:34:07.829088 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:34:07.829099 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:34:07.830097 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:34:07.831495 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:34:07.835326 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:34:07.836363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:34:07.851152 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:34:07.852686 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:34:07.861120 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:34:07.861153 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:34:07.862006 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:34:07.865016 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:34:07.871534 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:34:07.873624 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:34:07.878534 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
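Editor's note: verity-setup.service above assembles the read-only /dev/mapper/usr device from the USR-A partition using dm-verity, with the sha256 root hash supplied as verity.usrhash on the kernel command line. Conceptually, dm-verity hashes the data device block by block into a Merkle tree whose root must equal that hash. The toy sketch below shows only the leaf level and omits the salt, superblock, and on-disk tree layout of the real format, so it is an illustration of the idea rather than a reimplementation; the image path is hypothetical:

    import hashlib

    BLOCK = 4096  # dm-verity's default data/hash block size

    def leaf_hashes(image_path: str):
        """Yield the sha256 digest of each 4 KiB block of a data image."""
        with open(image_path, 'rb') as f:
            while chunk := f.read(BLOCK):
                yield hashlib.sha256(chunk.ljust(BLOCK, b'\0')).digest()

    def toy_root_hash(image_path: str) -> str:
        """Collapse the leaves into one digest -- a stand-in for the real
        Merkle-tree root that veritysetup computes and the kernel verifies."""
        top = hashlib.sha256()
        for digest in leaf_hashes(image_path):
            top.update(digest)
        return top.hexdigest()

    # Usage (hypothetical path):
    # print(toy_root_hash('/path/to/usr-a.img'))
    # The kernel rejects any block whose hash no longer matches the tree,
    # which is how verity.usrhash pins the entire /usr contents.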
Feb 13 19:34:07.890167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:34:07.947494 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:34:07.956564 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:34:07.982876 systemd-networkd[766]: lo: Link UP Feb 13 19:34:07.982892 systemd-networkd[766]: lo: Gained carrier Feb 13 19:34:07.983631 systemd-networkd[766]: Enumeration completed Feb 13 19:34:07.983802 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:34:07.984110 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:34:07.984113 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:34:07.990007 ignition[667]: Ignition 2.19.0 Feb 13 19:34:07.985040 systemd[1]: Reached target network.target - Network. Feb 13 19:34:07.990014 ignition[667]: Stage: fetch-offline Feb 13 19:34:07.986023 systemd-networkd[766]: eth0: Link UP Feb 13 19:34:07.990047 ignition[667]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:07.986027 systemd-networkd[766]: eth0: Gained carrier Feb 13 19:34:07.990055 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:07.986035 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:34:07.990206 ignition[667]: parsed url from cmdline: "" Feb 13 19:34:07.990209 ignition[667]: no config URL provided Feb 13 19:34:07.990214 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:34:07.990221 ignition[667]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:34:07.990242 ignition[667]: op(1): [started] loading QEMU firmware config module Feb 13 19:34:07.990247 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:34:07.999396 ignition[667]: op(1): [finished] loading QEMU firmware config module Feb 13 19:34:08.010046 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:34:08.026972 ignition[667]: parsing config with SHA512: 1cbb78b7da7292084501afde79fc8c0eaf8ffee25ac4e3e105b6d2bd638e8e6b34739663385193d1f53e04221f35e808ceefdbaaa4e7b8315567f502d8802240 Feb 13 19:34:08.030806 unknown[667]: fetched base config from "system" Feb 13 19:34:08.030815 unknown[667]: fetched user config from "qemu" Feb 13 19:34:08.031230 ignition[667]: fetch-offline: fetch-offline passed Feb 13 19:34:08.031298 ignition[667]: Ignition finished successfully Feb 13 19:34:08.034715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:34:08.036243 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:34:08.044153 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:34:08.054362 ignition[774]: Ignition 2.19.0 Feb 13 19:34:08.054372 ignition[774]: Stage: kargs Feb 13 19:34:08.054563 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:08.054574 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:08.055431 ignition[774]: kargs: kargs passed Feb 13 19:34:08.058278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
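Editor's note: the fetch-offline stage above loads the qemu_fw_cfg module, pulls the user config over QEMU's firmware-config interface, and merges it with the base config from "system"; only the SHA512 of the parsed result is visible in the log, not the config itself. For orientation, a hypothetical minimal Ignition v3 config of the kind that would produce the "adding ssh keys to user core" step seen later in the files stage, expressed here as a Python dict:

    import json

    # Hypothetical user config; the real user.ign consumed in this boot is not
    # visible in the journal, only its SHA512 checksum.
    user_config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {
                    "name": "core",
                    "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],
                }
            ]
        },
    }

    print(json.dumps(user_config, indent=2))
    # Under QEMU, a config like this is handed to the guest through the firmware
    # config device (-fw_cfg), which is what the qemu_fw_cfg module probed above
    # exposes to Ignition; the exact fw_cfg key name is provider-specific.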
Feb 13 19:34:08.055488 ignition[774]: Ignition finished successfully Feb 13 19:34:08.060433 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:34:08.074094 ignition[782]: Ignition 2.19.0 Feb 13 19:34:08.074107 ignition[782]: Stage: disks Feb 13 19:34:08.074266 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:08.074275 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:08.075130 ignition[782]: disks: disks passed Feb 13 19:34:08.077199 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:34:08.075183 ignition[782]: Ignition finished successfully Feb 13 19:34:08.078771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:34:08.080248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:34:08.082230 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:34:08.083868 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:34:08.085807 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:34:08.098154 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:34:08.107355 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:34:08.111435 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:34:08.113488 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:34:08.160302 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:34:08.160730 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:34:08.162047 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:34:08.173081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:34:08.174775 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:34:08.175998 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:34:08.176076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:34:08.183008 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Feb 13 19:34:08.176127 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:34:08.180587 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:34:08.190430 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:34:08.190456 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:34:08.190466 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:34:08.190476 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:34:08.184693 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:34:08.191873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:34:08.228012 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:34:08.231213 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:34:08.235304 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:34:08.239142 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:34:08.312502 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:34:08.326079 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:34:08.328508 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:34:08.334003 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:34:08.347624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:34:08.349521 ignition[915]: INFO : Ignition 2.19.0 Feb 13 19:34:08.349521 ignition[915]: INFO : Stage: mount Feb 13 19:34:08.349521 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:08.349521 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:08.354766 ignition[915]: INFO : mount: mount passed Feb 13 19:34:08.354766 ignition[915]: INFO : Ignition finished successfully Feb 13 19:34:08.351719 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:34:08.365090 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:34:08.828075 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:34:08.849163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:34:08.855975 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Feb 13 19:34:08.856013 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:34:08.856025 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:34:08.857602 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:34:08.860003 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:34:08.860895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:34:08.876165 ignition[944]: INFO : Ignition 2.19.0 Feb 13 19:34:08.876165 ignition[944]: INFO : Stage: files Feb 13 19:34:08.877733 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:08.877733 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:08.877733 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:34:08.881092 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:34:08.881092 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:34:08.881092 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:34:08.881092 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:34:08.881092 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:34:08.880205 unknown[944]: wrote ssh authorized keys file for user: core Feb 13 19:34:08.888504 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:34:08.888504 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:34:08.950329 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:34:09.170225 systemd-networkd[766]: eth0: Gained IPv6LL Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:34:09.485916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:34:09.501262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:34:09.501262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:34:09.501262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:34:09.501262 ignition[944]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:34:09.501262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:34:09.501262 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:34:09.686212 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:34:09.889241 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:34:09.889241 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:34:09.893041 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:34:09.914392 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:34:09.918275 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:34:09.921034 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:34:09.921034 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:34:09.921034 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:34:09.921034 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:34:09.921034 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:34:09.921034 ignition[944]: INFO : files: files passed Feb 13 19:34:09.921034 ignition[944]: INFO : Ignition finished successfully Feb 13 19:34:09.921557 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:34:09.930194 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:34:09.933286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
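Editor's note: the files stage above corresponds to the storage section of the merged Ignition config: ops (3) and (a) fetch remote payloads (the Helm tarball and the kubernetes sysext image), and op (9) creates the /etc/extensions/kubernetes.raw link that systemd-sysext later picks up. A hedged sketch of what such entries look like in Ignition v3 terms, again as a Python dict rather than the actual config used in this boot (URLs and paths are copied from the log; modes are illustrative):

    import json

    storage = {
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
                    "mode": 0o644,  # serialized as the decimal integer 420
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
                    "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw"},
                    "mode": 0o644,
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
                    "hard": False,
                }
            ],
        }
    }

    print(json.dumps(storage, indent=2))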
Feb 13 19:34:09.935456 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:34:09.935556 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:34:09.940707 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:34:09.943251 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:34:09.943251 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:34:09.946364 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:34:09.946297 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:34:09.947776 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:34:09.961161 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:34:09.981110 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:34:09.982139 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:34:09.983618 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:34:09.985493 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:34:09.987274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:34:09.997124 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:34:10.010016 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:34:10.019152 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:34:10.028756 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:34:10.030052 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:34:10.032144 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:34:10.034008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:34:10.034128 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:34:10.036702 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:34:10.038744 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:34:10.040362 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:34:10.042069 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:34:10.044039 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:34:10.046030 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:34:10.047886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:34:10.049888 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:34:10.051851 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:34:10.053634 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:34:10.055135 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:34:10.055256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:34:10.057609 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:34:10.059569 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:34:10.061473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:34:10.066074 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:34:10.067345 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:34:10.067478 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:34:10.070265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:34:10.070381 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:34:10.072462 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:34:10.074015 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:34:10.080041 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:34:10.081330 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:34:10.083406 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:34:10.084972 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:34:10.085082 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:34:10.086678 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:34:10.086765 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:34:10.088300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:34:10.088413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:34:10.090167 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:34:10.090272 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:34:10.102216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:34:10.103828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:34:10.104793 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:34:10.104915 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:34:10.106810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:34:10.106914 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:34:10.112146 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:34:10.113981 ignition[1000]: INFO : Ignition 2.19.0 Feb 13 19:34:10.113981 ignition[1000]: INFO : Stage: umount Feb 13 19:34:10.116664 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:34:10.116664 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:34:10.114014 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:34:10.121102 ignition[1000]: INFO : umount: umount passed Feb 13 19:34:10.121102 ignition[1000]: INFO : Ignition finished successfully Feb 13 19:34:10.118552 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:34:10.118647 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:34:10.120905 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:34:10.121262 systemd[1]: Stopped target network.target - Network. Feb 13 19:34:10.122822 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 19:34:10.122882 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:34:10.124643 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:34:10.124692 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:34:10.125809 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:34:10.125851 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:34:10.127645 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:34:10.127692 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:34:10.129633 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:34:10.131335 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:34:10.138032 systemd-networkd[766]: eth0: DHCPv6 lease lost Feb 13 19:34:10.139467 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:34:10.141033 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:34:10.143245 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:34:10.143490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:34:10.147330 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:34:10.147371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:34:10.153072 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:34:10.154695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:34:10.154753 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:34:10.156777 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:34:10.156824 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:34:10.158602 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:34:10.158645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:34:10.160845 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:34:10.160890 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:34:10.162871 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:34:10.173508 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:34:10.173622 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:34:10.179642 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:34:10.179740 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:34:10.182737 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:34:10.182866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:34:10.184839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:34:10.184897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:34:10.186075 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:34:10.186108 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:34:10.188065 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:34:10.188112 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
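The teardown above walks back through the Ignition stages that ran earlier in the boot (fetch-offline, disks, kargs, setup, mount, files) before the initrd hands over to the real root. Given a captured console dump in this format, a minimal sketch for summarising which stages appear and how many reported completion; the regexes only assume the "ignition[pid]: INFO : ..." message shape visible in this log.

    import re
    import sys

    STAGE_RE = re.compile(r"ignition\[\d+\]: INFO : Stage: (\w+)")
    DONE_RE = re.compile(r"ignition\[\d+\]: INFO : Ignition finished successfully")

    def summarize_ignition(log_text: str) -> dict:
        """Collect Ignition stage names and count completion markers in a log dump."""
        return {
            "stages_seen": STAGE_RE.findall(log_text),      # e.g. ['umount'] for this excerpt
            "completions": len(DONE_RE.findall(log_text)),  # one per stage that finished
        }

    if __name__ == "__main__":
        # Usage: python3 summarize_ignition.py < console.log
        print(summarize_ignition(sys.stdin.read()))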
Feb 13 19:34:10.190840 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:34:10.190886 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:34:10.193699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:34:10.193746 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:34:10.196608 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:34:10.196657 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:34:10.209129 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:34:10.210169 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:34:10.210234 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:34:10.212288 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:34:10.212333 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:34:10.214301 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:34:10.214344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:34:10.216470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:34:10.216516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:34:10.218752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:34:10.218828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:34:10.221174 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:34:10.223291 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:34:10.232391 systemd[1]: Switching root. Feb 13 19:34:10.262130 systemd-journald[237]: Journal stopped Feb 13 19:34:10.942620 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 19:34:10.942683 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:34:10.942696 kernel: SELinux: policy capability open_perms=1 Feb 13 19:34:10.942706 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:34:10.942715 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:34:10.942725 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:34:10.942739 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:34:10.942748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:34:10.942760 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:34:10.942770 kernel: audit: type=1403 audit(1739475250.389:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:34:10.942785 systemd[1]: Successfully loaded SELinux policy in 30.562ms. Feb 13 19:34:10.942802 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.458ms. Feb 13 19:34:10.942813 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:34:10.942826 systemd[1]: Detected virtualization kvm. Feb 13 19:34:10.942836 systemd[1]: Detected architecture arm64. 
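The "systemd 255 running in system mode (+PAM +AUDIT +SELINUX ...)" banner above encodes compile-time features as +/- prefixes. A small helper to split such a banner into enabled and disabled feature lists; the sample string is copied from the banner above, and the parsing is a generic convenience rather than anything systemd itself provides.

    def parse_feature_banner(banner: str):
        """Split a systemd feature banner into (enabled, disabled) feature-name lists."""
        enabled, disabled = [], []
        for token in banner.split():
            if token.startswith("+"):
                enabled.append(token[1:])
            elif token.startswith("-"):
                disabled.append(token[1:])
        return enabled, disabled

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")
    on, off = parse_feature_banner(banner)
    print("enabled:", ", ".join(on))
    print("disabled:", ", ".join(off))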
Feb 13 19:34:10.942848 systemd[1]: Detected first boot. Feb 13 19:34:10.942859 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:34:10.942875 zram_generator::config[1044]: No configuration found. Feb 13 19:34:10.942886 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:34:10.942897 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:34:10.942907 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:34:10.942917 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:34:10.942929 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:34:10.942940 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:34:10.942950 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:34:10.942962 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:34:10.942973 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:34:10.942984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:34:10.943006 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:34:10.943016 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:34:10.943027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:34:10.943037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:34:10.943048 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:34:10.943059 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:34:10.943071 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:34:10.943082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:34:10.943093 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:34:10.943104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:34:10.943114 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:34:10.943124 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:34:10.943135 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:34:10.943147 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:34:10.943158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:34:10.943168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:34:10.943179 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:34:10.943189 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:34:10.943200 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:34:10.943210 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:34:10.943221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:34:10.943231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
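systemd reports "Detected first boot" and seeds the machine ID from the VM UUID. A rough way to check the same signals from userspace afterwards; the paths below (/etc/machine-id, the DMI product_uuid node) and the "uninitialized" convention are assumptions about a typical image, not something this log states.

    from pathlib import Path

    def machine_id_state(path: str = "/etc/machine-id") -> str:
        """Roughly mirror systemd's first-boot check: a missing or 'uninitialized'
        machine-id file marks first boot; otherwise it holds a 32-hex-character ID."""
        p = Path(path)
        if not p.exists():
            return "missing (first boot)"
        content = p.read_text().strip()
        if content in ("", "uninitialized"):
            return "uninitialized (first boot)"
        return "initialized: " + content

    def vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        """Best-effort read of the VM UUID that the machine ID can be seeded from."""
        try:
            return Path(path).read_text().strip()
        except OSError:
            return "unavailable"

    if __name__ == "__main__":
        print(machine_id_state())
        print("VM UUID:", vm_uuid())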
Feb 13 19:34:10.943241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:34:10.943254 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:34:10.943265 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:34:10.943275 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:34:10.943285 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:34:10.943296 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:34:10.943332 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:34:10.943347 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:34:10.943359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:34:10.943390 systemd[1]: Reached target machines.target - Containers. Feb 13 19:34:10.943406 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:34:10.943417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:34:10.943435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:34:10.943448 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:34:10.943459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:34:10.943470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:34:10.943481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:34:10.943492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:34:10.943505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:34:10.943517 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:34:10.943528 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:34:10.943539 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:34:10.943549 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:34:10.943560 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:34:10.943571 kernel: fuse: init (API version 7.39) Feb 13 19:34:10.943581 kernel: ACPI: bus type drm_connector registered Feb 13 19:34:10.943591 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:34:10.943603 kernel: loop: module loaded Feb 13 19:34:10.943615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:34:10.943625 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:34:10.943636 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:34:10.943646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:34:10.943657 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:34:10.943668 systemd[1]: Stopped verity-setup.service. Feb 13 19:34:10.943698 systemd-journald[1116]: Collecting audit messages is disabled. 
Feb 13 19:34:10.943721 systemd-journald[1116]: Journal started Feb 13 19:34:10.943742 systemd-journald[1116]: Runtime Journal (/run/log/journal/7462baef99e043b3a188c93250035a40) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:34:10.738420 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:34:10.755951 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:34:10.756315 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:34:10.945650 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:34:10.946288 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:34:10.947436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:34:10.948657 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:34:10.949782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:34:10.951039 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:34:10.952219 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:34:10.955017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:34:10.956364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:34:10.957813 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:34:10.957962 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:34:10.959368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:34:10.959524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:34:10.960929 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:34:10.961098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:34:10.962356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:34:10.962500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:34:10.963931 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:34:10.964296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:34:10.966460 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:34:10.966643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:34:10.968032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:34:10.971370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:34:10.972962 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:34:10.985371 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:34:11.000088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:34:11.002131 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:34:11.003253 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:34:11.003290 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:34:11.005201 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:34:11.007449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
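With systemd-journald up and the runtime journal sized as reported above, the same records shown in this transcript can be read back programmatically. A sketch using the python-systemd bindings; it assumes the "systemd" Python package is installed on the host and uses only standard journal fields.

    from systemd import journal  # python-systemd bindings, assumed installed

    # Read this boot's journal and print unit + message, similar in spirit to `journalctl -b`.
    reader = journal.Reader()
    reader.this_boot()
    reader.log_level(journal.LOG_INFO)

    while True:
        entry = reader.get_next()
        if not entry:  # an empty dict means the end of the journal was reached
            break
        unit = entry.get("_SYSTEMD_UNIT", entry.get("SYSLOG_IDENTIFIER", "?"))
        print(unit + ": " + str(entry.get("MESSAGE", "")))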
Feb 13 19:34:11.009639 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:34:11.010781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:34:11.012376 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:34:11.014317 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:34:11.015504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:34:11.019184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:34:11.020454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:34:11.022177 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:34:11.027289 systemd-journald[1116]: Time spent on flushing to /var/log/journal/7462baef99e043b3a188c93250035a40 is 19.541ms for 854 entries. Feb 13 19:34:11.027289 systemd-journald[1116]: System Journal (/var/log/journal/7462baef99e043b3a188c93250035a40) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:34:11.053974 systemd-journald[1116]: Received client request to flush runtime journal. Feb 13 19:34:11.054049 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:34:11.027228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:34:11.030757 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:34:11.033581 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:34:11.035116 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:34:11.037478 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:34:11.039116 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:34:11.040632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:34:11.046845 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:34:11.056205 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:34:11.058941 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:34:11.062013 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:34:11.065113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:34:11.067210 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Feb 13 19:34:11.067228 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Feb 13 19:34:11.071139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:34:11.073856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:34:11.091143 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:34:11.093066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:34:11.093825 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:34:11.097908 udevadm[1168]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:34:11.112603 kernel: loop1: detected capacity change from 0 to 189592 Feb 13 19:34:11.119661 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:34:11.128166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:34:11.141390 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Feb 13 19:34:11.141409 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Feb 13 19:34:11.144923 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:34:11.148036 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 19:34:11.199036 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:34:11.205033 kernel: loop4: detected capacity change from 0 to 189592 Feb 13 19:34:11.214003 kernel: loop5: detected capacity change from 0 to 114328 Feb 13 19:34:11.216541 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:34:11.216911 (sd-merge)[1185]: Merged extensions into '/usr'. Feb 13 19:34:11.221117 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:34:11.221138 systemd[1]: Reloading... Feb 13 19:34:11.290074 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:34:11.291090 zram_generator::config[1224]: No configuration found. Feb 13 19:34:11.362417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:11.397919 systemd[1]: Reloading finished in 176 ms. Feb 13 19:34:11.436584 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:34:11.438314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:34:11.451212 systemd[1]: Starting ensure-sysext.service... Feb 13 19:34:11.453089 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:34:11.466691 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:34:11.466708 systemd[1]: Reloading... Feb 13 19:34:11.474434 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:34:11.475003 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:34:11.475742 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:34:11.476098 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Feb 13 19:34:11.476225 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Feb 13 19:34:11.478372 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:34:11.478482 systemd-tmpfiles[1247]: Skipping /boot Feb 13 19:34:11.485404 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:34:11.485498 systemd-tmpfiles[1247]: Skipping /boot Feb 13 19:34:11.515266 zram_generator::config[1271]: No configuration found. 
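systemd-sysext reports merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr and then reloads the manager so the merged units become visible. "systemd-sysext status" is the authoritative view; as a small illustrative sketch, the images Ignition dropped under /etc/extensions (the directory the kubernetes.raw link was written into earlier) can be listed like this.

    from pathlib import Path

    # List sysext images/links installed under /etc/extensions. Informational only;
    # the actual merge into /usr is performed by systemd-sysext at boot.
    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for entry in sorted(ext_dir.iterdir()):
            target = entry.resolve() if entry.is_symlink() else entry
            print(entry.name, "->", target)
    else:
        print("no /etc/extensions directory on this host")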
Feb 13 19:34:11.597356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:11.632504 systemd[1]: Reloading finished in 165 ms. Feb 13 19:34:11.648752 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:34:11.662476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:34:11.673120 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:34:11.675615 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:34:11.678240 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:34:11.682085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:34:11.690517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:34:11.695229 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:34:11.698421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:34:11.699801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:34:11.702043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:34:11.707285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:34:11.708335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:34:11.709107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:34:11.712124 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:34:11.714038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:34:11.715902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:34:11.716041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:34:11.718020 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:34:11.718140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:34:11.724853 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Feb 13 19:34:11.726357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:34:11.738413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:34:11.742184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:34:11.744578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:34:11.745704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:34:11.750192 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:34:11.756339 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:34:11.758609 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:34:11.761020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:34:11.763106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:34:11.765028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:34:11.766948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:34:11.769037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:34:11.770835 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:34:11.770952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:34:11.772730 augenrules[1357]: No rules Feb 13 19:34:11.779339 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:34:11.781151 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:34:11.784977 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:34:11.797022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358) Feb 13 19:34:11.795929 systemd[1]: Finished ensure-sysext.service. Feb 13 19:34:11.802485 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:34:11.804582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:34:11.812200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:34:11.816461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:34:11.819380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:34:11.823230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:34:11.826163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:34:11.830702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:34:11.833435 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:34:11.834635 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:34:11.834962 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:34:11.837451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:34:11.839008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:34:11.842058 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:34:11.842400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:34:11.843743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:34:11.843869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:34:11.845810 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:34:11.845954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:34:11.857615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:34:11.861248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Feb 13 19:34:11.863187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:34:11.863269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:34:11.884601 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:34:11.894951 systemd-resolved[1315]: Positive Trust Anchors: Feb 13 19:34:11.894971 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:34:11.895015 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:34:11.904746 systemd-resolved[1315]: Defaulting to hostname 'linux'. Feb 13 19:34:11.906129 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:34:11.907413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:34:11.911226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:34:11.912956 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:34:11.927871 systemd-networkd[1389]: lo: Link UP Feb 13 19:34:11.927879 systemd-networkd[1389]: lo: Gained carrier Feb 13 19:34:11.929604 systemd-networkd[1389]: Enumeration completed Feb 13 19:34:11.929750 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:34:11.930131 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:34:11.930139 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:34:11.930962 systemd-networkd[1389]: eth0: Link UP Feb 13 19:34:11.930969 systemd-networkd[1389]: eth0: Gained carrier Feb 13 19:34:11.930982 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:34:11.933687 systemd[1]: Reached target network.target - Network. Feb 13 19:34:11.945035 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:34:11.945586 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Feb 13 19:34:11.946610 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:34:11.946652 systemd-timesyncd[1390]: Initial clock synchronization to Thu 2025-02-13 19:34:12.077589 UTC. Feb 13 19:34:11.957249 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:34:11.959848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:34:11.968975 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
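systemd-timesyncd reaches the DHCP-provided NTP server at 10.0.0.1:123 and steps the clock once. Purely to illustrate that exchange, and not how timesyncd is implemented, a bare-bones SNTP query looks like this: a 48-byte mode-3 client packet, with the server's transmit timestamp read from byte offset 40 and shifted from the 1900 NTP epoch to the Unix epoch.

    import socket
    import struct
    import time

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

    def sntp_time(server: str = "10.0.0.1", port: int = 123, timeout: float = 2.0) -> float:
        """Send a minimal SNTP client request and return the server's transmit time (Unix seconds)."""
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(512)
        seconds, fraction = struct.unpack("!II", data[40:48])  # transmit timestamp field
        return seconds - NTP_EPOCH_DELTA + fraction / 2**32

    if __name__ == "__main__":
        try:
            print(time.ctime(sntp_time()))
        except OSError as err:
            print("query failed:", err)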
Feb 13 19:34:11.972448 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:34:12.006900 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:34:12.013056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:34:12.041411 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:34:12.042878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:34:12.045134 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:34:12.046289 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:34:12.047686 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:34:12.049224 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:34:12.050482 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:34:12.051968 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:34:12.053171 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:34:12.053208 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:34:12.054088 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:34:12.055681 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:34:12.058008 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:34:12.076002 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:34:12.078158 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:34:12.079677 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:34:12.080859 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:34:12.081839 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:34:12.082810 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:34:12.082844 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:34:12.083763 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:34:12.088025 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:34:12.085758 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:34:12.088656 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:34:12.091466 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:34:12.093582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:34:12.095026 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:34:12.101061 jq[1418]: false Feb 13 19:34:12.103562 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:34:12.107695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
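docker.socket is now listening (on /run/docker.sock, per the legacy-path warning emitted during the earlier reload), so the first client connection will socket-activate the Docker service. A hedged sketch of poking that socket with a raw /_ping request; the endpoint and socket path are standard Docker Engine conventions rather than anything this log guarantees.

    import socket

    DOCKER_SOCKET = "/run/docker.sock"  # path referenced by docker.socket in the log

    def docker_ping(path: str = DOCKER_SOCKET) -> str:
        """Connect to the (socket-activated) Docker API socket and issue GET /_ping."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(3.0)
            s.connect(path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    if __name__ == "__main__":
        # Expect a status line such as "HTTP/1.1 200 OK" once dockerd has answered.
        print(docker_ping().splitlines()[0])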
Feb 13 19:34:12.112179 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:34:12.118780 extend-filesystems[1419]: Found loop3 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found loop4 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found loop5 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda1 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda2 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda3 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found usr Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda4 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda6 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda7 Feb 13 19:34:12.118780 extend-filesystems[1419]: Found vda9 Feb 13 19:34:12.118780 extend-filesystems[1419]: Checking size of /dev/vda9 Feb 13 19:34:12.118024 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:34:12.121083 dbus-daemon[1417]: [system] SELinux support is enabled Feb 13 19:34:12.119895 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:34:12.120463 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:34:12.124178 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:34:12.139122 jq[1435]: true Feb 13 19:34:12.127140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:34:12.131362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:34:12.139079 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:34:12.141289 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:34:12.144068 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:34:12.144360 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:34:12.144512 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:34:12.146894 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:34:12.147079 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:34:12.160969 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:34:12.161011 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:34:12.162019 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:34:12.165047 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:34:12.165071 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:34:12.167346 tar[1439]: linux-arm64/helm Feb 13 19:34:12.168925 jq[1440]: true Feb 13 19:34:12.170357 extend-filesystems[1419]: Resized partition /dev/vda9 Feb 13 19:34:12.180827 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:34:12.195104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1341) Feb 13 19:34:12.195132 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:34:12.202140 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:34:12.202576 systemd-logind[1430]: New seat seat0. Feb 13 19:34:12.214105 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:34:12.214425 update_engine[1433]: I20250213 19:34:12.214219 1433 main.cc:92] Flatcar Update Engine starting Feb 13 19:34:12.222463 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:34:12.225025 update_engine[1433]: I20250213 19:34:12.222499 1433 update_check_scheduler.cc:74] Next update check in 9m56s Feb 13 19:34:12.231014 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:34:12.233339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:34:12.250753 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:34:12.250753 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:34:12.250753 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:34:12.257332 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Feb 13 19:34:12.252818 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:34:12.253113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:34:12.271909 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:34:12.273257 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:34:12.275708 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:34:12.280677 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:34:12.376772 containerd[1441]: time="2025-02-13T19:34:12.376643765Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:34:12.402301 containerd[1441]: time="2025-02-13T19:34:12.402255496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.403688 containerd[1441]: time="2025-02-13T19:34:12.403648091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.403765525Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.403790891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.403936070Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.403965147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404044028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404059029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404219090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404235261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404248085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404258490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404334185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.404846 containerd[1441]: time="2025-02-13T19:34:12.404528968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:34:12.405108 containerd[1441]: time="2025-02-13T19:34:12.404635675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:34:12.405108 containerd[1441]: time="2025-02-13T19:34:12.404650072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:34:12.405108 containerd[1441]: time="2025-02-13T19:34:12.404740526Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:34:12.405108 containerd[1441]: time="2025-02-13T19:34:12.404783435Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:34:12.411324 containerd[1441]: time="2025-02-13T19:34:12.411294335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:34:12.411446 containerd[1441]: time="2025-02-13T19:34:12.411428908Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:34:12.411511 containerd[1441]: time="2025-02-13T19:34:12.411490892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:34:12.411574 containerd[1441]: time="2025-02-13T19:34:12.411561949Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 19:34:12.411649 containerd[1441]: time="2025-02-13T19:34:12.411634983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:34:12.411869 containerd[1441]: time="2025-02-13T19:34:12.411845170Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:34:12.412231 containerd[1441]: time="2025-02-13T19:34:12.412201546Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:34:12.412415 containerd[1441]: time="2025-02-13T19:34:12.412394030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:34:12.412484 containerd[1441]: time="2025-02-13T19:34:12.412469765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:34:12.412536 containerd[1441]: time="2025-02-13T19:34:12.412523401Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:34:12.412610 containerd[1441]: time="2025-02-13T19:34:12.412595184Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412667 containerd[1441]: time="2025-02-13T19:34:12.412654264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412727 containerd[1441]: time="2025-02-13T19:34:12.412707295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412798 containerd[1441]: time="2025-02-13T19:34:12.412781780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412863 containerd[1441]: time="2025-02-13T19:34:12.412849450Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412925 containerd[1441]: time="2025-02-13T19:34:12.412911272Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.412979 containerd[1441]: time="2025-02-13T19:34:12.412965553Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.413073 containerd[1441]: time="2025-02-13T19:34:12.413057016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:34:12.413137 containerd[1441]: time="2025-02-13T19:34:12.413123476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413190 containerd[1441]: time="2025-02-13T19:34:12.413177193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413250 containerd[1441]: time="2025-02-13T19:34:12.413236878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413312 containerd[1441]: time="2025-02-13T19:34:12.413297894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413368 containerd[1441]: time="2025-02-13T19:34:12.413354151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 19:34:12.413425 containerd[1441]: time="2025-02-13T19:34:12.413412343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413480 containerd[1441]: time="2025-02-13T19:34:12.413465818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413545 containerd[1441]: time="2025-02-13T19:34:12.413531310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413602 containerd[1441]: time="2025-02-13T19:34:12.413588616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413672 containerd[1441]: time="2025-02-13T19:34:12.413656850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413727 containerd[1441]: time="2025-02-13T19:34:12.413715365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413797 containerd[1441]: time="2025-02-13T19:34:12.413770413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413851 containerd[1441]: time="2025-02-13T19:34:12.413839171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.413910 containerd[1441]: time="2025-02-13T19:34:12.413896961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:34:12.414239 containerd[1441]: time="2025-02-13T19:34:12.413971769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.414239 containerd[1441]: time="2025-02-13T19:34:12.413991852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.414239 containerd[1441]: time="2025-02-13T19:34:12.414031050Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:34:12.414921 containerd[1441]: time="2025-02-13T19:34:12.414891723Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:34:12.415170 containerd[1441]: time="2025-02-13T19:34:12.415149135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:34:12.415229 containerd[1441]: time="2025-02-13T19:34:12.415215756Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:34:12.415282 containerd[1441]: time="2025-02-13T19:34:12.415267859Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:34:12.415329 containerd[1441]: time="2025-02-13T19:34:12.415316494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.416003 containerd[1441]: time="2025-02-13T19:34:12.415390617Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:34:12.416003 containerd[1441]: time="2025-02-13T19:34:12.415408885Z" level=info msg="NRI interface is disabled by configuration." 
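containerd's own messages embedded in the journal use a logfmt-like time=/level=/msg= layout, with double-quoted values when they contain spaces or escaped quotes. A small parser for pulling those fields apart; the regex assumes only the quoting style visible in the plugin-loading lines above.

    import re

    # key=value pairs, where values are either bare tokens or double-quoted strings
    # that may contain escaped quotes (\").
    PAIR_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_containerd_line(line: str) -> dict:
        fields = {}
        for key, raw in PAIR_RE.findall(line):
            if raw.startswith('"') and raw.endswith('"'):
                raw = raw[1:-1].replace('\\"', '"')
            fields[key] = raw
        return fields

    sample = ('time="2025-02-13T19:34:12.404650072Z" level=info '
              'msg="loading plugin \\"io.containerd.content.v1.content\\"..." '
              'type=io.containerd.content.v1')
    print(parse_containerd_line(sample))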
Feb 13 19:34:12.416003 containerd[1441]: time="2025-02-13T19:34:12.415420378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:34:12.416098 containerd[1441]: time="2025-02-13T19:34:12.415775141Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:34:12.416098 containerd[1441]: time="2025-02-13T19:34:12.415845593Z" level=info msg="Connect containerd service" Feb 13 19:34:12.416098 containerd[1441]: time="2025-02-13T19:34:12.415874992Z" level=info msg="using legacy CRI server" Feb 13 19:34:12.416098 containerd[1441]: time="2025-02-13T19:34:12.415881807Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:34:12.416098 containerd[1441]: time="2025-02-13T19:34:12.415957583Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:34:12.416696 containerd[1441]: time="2025-02-13T19:34:12.416666382Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:34:12.417132 containerd[1441]: time="2025-02-13T19:34:12.417028928Z" level=info msg="Start subscribing containerd event" Feb 13 19:34:12.417132 containerd[1441]: time="2025-02-13T19:34:12.417221250Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:34:12.417306 containerd[1441]: time="2025-02-13T19:34:12.417269482Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:34:12.417416 containerd[1441]: time="2025-02-13T19:34:12.417397643Z" level=info msg="Start recovering state" Feb 13 19:34:12.417555 containerd[1441]: time="2025-02-13T19:34:12.417540927Z" level=info msg="Start event monitor" Feb 13 19:34:12.417771 containerd[1441]: time="2025-02-13T19:34:12.417654208Z" level=info msg="Start snapshots syncer" Feb 13 19:34:12.417771 containerd[1441]: time="2025-02-13T19:34:12.417672920Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:34:12.417771 containerd[1441]: time="2025-02-13T19:34:12.417683365Z" level=info msg="Start streaming server" Feb 13 19:34:12.418055 containerd[1441]: time="2025-02-13T19:34:12.418035990Z" level=info msg="containerd successfully booted in 0.046146s" Feb 13 19:34:12.418136 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:34:12.563649 tar[1439]: linux-arm64/LICENSE Feb 13 19:34:12.563649 tar[1439]: linux-arm64/README.md Feb 13 19:34:12.583543 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:34:12.869235 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:34:12.888023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:34:12.910273 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:34:12.915136 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:34:12.915316 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:34:12.918320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:34:12.929910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:34:12.933214 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:34:12.935753 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:34:12.937639 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:34:13.138329 systemd-networkd[1389]: eth0: Gained IPv6LL Feb 13 19:34:13.140762 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:34:13.142701 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:34:13.154322 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:34:13.156635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:13.158684 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:34:13.173495 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:34:13.173756 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:34:13.175487 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:34:13.182927 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 19:34:13.646297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:13.647846 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:34:13.649346 systemd[1]: Startup finished in 563ms (kernel) + 4.690s (initrd) + 3.289s (userspace) = 8.543s. Feb 13 19:34:13.649692 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:34:14.071764 kubelet[1530]: E0213 19:34:14.071597 1530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:34:14.073961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:34:14.074132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:34:18.115641 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:34:18.116884 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:45840.service - OpenSSH per-connection server daemon (10.0.0.1:45840). Feb 13 19:34:18.174568 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:34:18.175318 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:18.184733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:34:18.217010 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:34:18.225642 systemd-logind[1430]: New session 1 of user core. Feb 13 19:34:18.231200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:34:18.233484 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:34:18.240651 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:34:18.318858 systemd[1547]: Queued start job for default target default.target. Feb 13 19:34:18.336049 systemd[1547]: Created slice app.slice - User Application Slice. Feb 13 19:34:18.336105 systemd[1547]: Reached target paths.target - Paths. Feb 13 19:34:18.336118 systemd[1547]: Reached target timers.target - Timers. Feb 13 19:34:18.337354 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:34:18.346567 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:34:18.346629 systemd[1547]: Reached target sockets.target - Sockets. Feb 13 19:34:18.346641 systemd[1547]: Reached target basic.target - Basic System. Feb 13 19:34:18.346674 systemd[1547]: Reached target default.target - Main User Target. Feb 13 19:34:18.346699 systemd[1547]: Startup finished in 98ms. Feb 13 19:34:18.346989 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:34:18.348242 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:34:18.414449 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:45848.service - OpenSSH per-connection server daemon (10.0.0.1:45848). 
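The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the usual pre-bootstrap state on a kubeadm-style node: that file is only written during kubeadm init/join, so systemd keeps restarting the unit until then (the retries at 19:34:24 and 19:34:34 later in this log fail the same way). For orientation only, a minimal KubeletConfiguration of the kind that ends up at that path might look like the sketch below; the content actually generated on this host is not shown in the log, so treat every field as illustrative.

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # systemd matches the SystemdCgroup:true runc option in the containerd config above
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    anonymous:
      enabled: false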
Feb 13 19:34:18.460576 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 45848 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:34:18.461848 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:18.466663 systemd-logind[1430]: New session 2 of user core. Feb 13 19:34:18.478194 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:34:18.532658 sshd[1558]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:18.542316 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:45848.service: Deactivated successfully. Feb 13 19:34:18.543825 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:34:18.545447 systemd-logind[1430]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:34:18.546826 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:45864.service - OpenSSH per-connection server daemon (10.0.0.1:45864). Feb 13 19:34:18.547928 systemd-logind[1430]: Removed session 2. Feb 13 19:34:18.581958 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 45864 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:34:18.583102 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:18.586503 systemd-logind[1430]: New session 3 of user core. Feb 13 19:34:18.592126 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:34:18.639328 sshd[1565]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:18.649079 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:45864.service: Deactivated successfully. Feb 13 19:34:18.651271 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:34:18.652439 systemd-logind[1430]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:34:18.663390 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:45872.service - OpenSSH per-connection server daemon (10.0.0.1:45872). Feb 13 19:34:18.664181 systemd-logind[1430]: Removed session 3. Feb 13 19:34:18.692903 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 45872 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:34:18.693603 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:18.697063 systemd-logind[1430]: New session 4 of user core. Feb 13 19:34:18.707153 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:34:18.757086 sshd[1572]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:18.764947 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:45872.service: Deactivated successfully. Feb 13 19:34:18.766132 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:34:18.769063 systemd-logind[1430]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:34:18.770047 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:45874.service - OpenSSH per-connection server daemon (10.0.0.1:45874). Feb 13 19:34:18.770679 systemd-logind[1430]: Removed session 4. Feb 13 19:34:18.802455 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 45874 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:34:18.803531 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:18.806616 systemd-logind[1430]: New session 5 of user core. Feb 13 19:34:18.818111 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:34:18.878695 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:34:18.878963 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:34:19.181248 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:34:19.181424 (dockerd)[1600]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:34:19.449263 dockerd[1600]: time="2025-02-13T19:34:19.449134377Z" level=info msg="Starting up" Feb 13 19:34:19.590438 dockerd[1600]: time="2025-02-13T19:34:19.590385012Z" level=info msg="Loading containers: start." Feb 13 19:34:19.666028 kernel: Initializing XFRM netlink socket Feb 13 19:34:19.732778 systemd-networkd[1389]: docker0: Link UP Feb 13 19:34:19.755212 dockerd[1600]: time="2025-02-13T19:34:19.755154434Z" level=info msg="Loading containers: done." Feb 13 19:34:19.768092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck341982809-merged.mount: Deactivated successfully. Feb 13 19:34:19.768688 dockerd[1600]: time="2025-02-13T19:34:19.768400538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:34:19.768688 dockerd[1600]: time="2025-02-13T19:34:19.768497642Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:34:19.768688 dockerd[1600]: time="2025-02-13T19:34:19.768593459Z" level=info msg="Daemon has completed initialization" Feb 13 19:34:19.798549 dockerd[1600]: time="2025-02-13T19:34:19.798416625Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:34:19.798623 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:34:20.376699 containerd[1441]: time="2025-02-13T19:34:20.376652338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:34:21.012820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341802239.mount: Deactivated successfully. 
Feb 13 19:34:22.911828 containerd[1441]: time="2025-02-13T19:34:22.911781422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.912792 containerd[1441]: time="2025-02-13T19:34:22.912624790Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 19:34:22.913560 containerd[1441]: time="2025-02-13T19:34:22.913499130Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.916695 containerd[1441]: time="2025-02-13T19:34:22.916660534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.917965 containerd[1441]: time="2025-02-13T19:34:22.917918595Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.541223938s" Feb 13 19:34:22.917965 containerd[1441]: time="2025-02-13T19:34:22.917959088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:34:22.918694 containerd[1441]: time="2025-02-13T19:34:22.918670130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:34:24.326387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:34:24.339311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:24.439062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:24.443743 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:34:24.482920 kubelet[1817]: E0213 19:34:24.482861 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:34:24.486348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:34:24.486623 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:34:24.715851 containerd[1441]: time="2025-02-13T19:34:24.715716628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:24.716367 containerd[1441]: time="2025-02-13T19:34:24.716322949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 19:34:24.717266 containerd[1441]: time="2025-02-13T19:34:24.717190854Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:24.720485 containerd[1441]: time="2025-02-13T19:34:24.720445738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:24.721737 containerd[1441]: time="2025-02-13T19:34:24.721695398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.8029943s" Feb 13 19:34:24.721737 containerd[1441]: time="2025-02-13T19:34:24.721731655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:34:24.722293 containerd[1441]: time="2025-02-13T19:34:24.722269197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:34:26.041758 containerd[1441]: time="2025-02-13T19:34:26.041704635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.042283 containerd[1441]: time="2025-02-13T19:34:26.042251444Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 19:34:26.043634 containerd[1441]: time="2025-02-13T19:34:26.043591797Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.046205 containerd[1441]: time="2025-02-13T19:34:26.046172413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.047494 containerd[1441]: time="2025-02-13T19:34:26.047461677Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.325160004s" Feb 13 19:34:26.047567 containerd[1441]: time="2025-02-13T19:34:26.047498960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:34:26.047967 
containerd[1441]: time="2025-02-13T19:34:26.047911966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:34:27.052533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314683293.mount: Deactivated successfully. Feb 13 19:34:27.359186 containerd[1441]: time="2025-02-13T19:34:27.359054861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:27.359965 containerd[1441]: time="2025-02-13T19:34:27.359896110Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:34:27.360645 containerd[1441]: time="2025-02-13T19:34:27.360614980Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:27.363370 containerd[1441]: time="2025-02-13T19:34:27.363311423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:27.364170 containerd[1441]: time="2025-02-13T19:34:27.364136743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.316188579s" Feb 13 19:34:27.364273 containerd[1441]: time="2025-02-13T19:34:27.364254708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:34:27.364812 containerd[1441]: time="2025-02-13T19:34:27.364789808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:34:28.046872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328687846.mount: Deactivated successfully. 
Feb 13 19:34:28.826644 containerd[1441]: time="2025-02-13T19:34:28.826592791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:28.827185 containerd[1441]: time="2025-02-13T19:34:28.827147043Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:34:28.827925 containerd[1441]: time="2025-02-13T19:34:28.827875441Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:28.831172 containerd[1441]: time="2025-02-13T19:34:28.831124490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:28.832541 containerd[1441]: time="2025-02-13T19:34:28.832446495Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.467547511s" Feb 13 19:34:28.832541 containerd[1441]: time="2025-02-13T19:34:28.832488377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:34:28.833244 containerd[1441]: time="2025-02-13T19:34:28.833007487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:34:29.288240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657608954.mount: Deactivated successfully. 
Feb 13 19:34:29.293020 containerd[1441]: time="2025-02-13T19:34:29.292676610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:29.293721 containerd[1441]: time="2025-02-13T19:34:29.293680508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:34:29.294628 containerd[1441]: time="2025-02-13T19:34:29.294565922Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:29.297396 containerd[1441]: time="2025-02-13T19:34:29.297317506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:29.298399 containerd[1441]: time="2025-02-13T19:34:29.298165819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.129169ms" Feb 13 19:34:29.298399 containerd[1441]: time="2025-02-13T19:34:29.298213549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:34:29.298812 containerd[1441]: time="2025-02-13T19:34:29.298760962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:34:30.028670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447277750.mount: Deactivated successfully. Feb 13 19:34:31.947475 containerd[1441]: time="2025-02-13T19:34:31.947421026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.947972 containerd[1441]: time="2025-02-13T19:34:31.947918820Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 19:34:31.948871 containerd[1441]: time="2025-02-13T19:34:31.948834535Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.952033 containerd[1441]: time="2025-02-13T19:34:31.951975987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.953393 containerd[1441]: time="2025-02-13T19:34:31.953358181Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.654562487s" Feb 13 19:34:31.953393 containerd[1441]: time="2025-02-13T19:34:31.953393666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:34:34.557271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
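The pulls above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) all went through containerd's CRI image service, so the images land in the k8s.io namespace of the content store. Two hedged ways to confirm what was pulled, assuming crictl is installed and pointed at the containerd socket shown earlier in this log:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
  ctr -n k8s.io images ls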
Feb 13 19:34:34.567230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:34.676826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:34.681758 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:34:34.732302 kubelet[1970]: E0213 19:34:34.732255 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:34:34.734431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:34:34.734562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:34:36.766740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:36.779505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:36.802216 systemd[1]: Reloading requested from client PID 1986 ('systemctl') (unit session-5.scope)... Feb 13 19:34:36.802234 systemd[1]: Reloading... Feb 13 19:34:36.861369 zram_generator::config[2025]: No configuration found. Feb 13 19:34:36.965203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:37.017215 systemd[1]: Reloading finished in 214 ms. Feb 13 19:34:37.053375 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:34:37.053477 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:34:37.053717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:37.061810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:37.149969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:37.154382 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:37.196800 kubelet[2066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:37.196800 kubelet[2066]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:37.196800 kubelet[2066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
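The "Referenced but unset environment variable" notes (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) and the deprecated-flag warnings above both come from how kubeadm-style installs wire the kubelet unit: flags such as --container-runtime-endpoint and --pod-infra-container-image are injected through environment files rather than hard-coded in the unit. A hedged sketch of the upstream kubeadm drop-in layout (Flatcar ships its own kubelet unit, so the paths and contents below are the stock kubeadm defaults, not read from this host):

  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # kubeadm writes KUBELET_KUBEADM_ARGS (runtime endpoint, pause image, ...) here
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # admin-supplied overrides end up in KUBELET_EXTRA_ARGS
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS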
Feb 13 19:34:37.196800 kubelet[2066]: I0213 19:34:37.195410 2066 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:37.810273 kubelet[2066]: I0213 19:34:37.810214 2066 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:34:37.810273 kubelet[2066]: I0213 19:34:37.810248 2066 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:37.810509 kubelet[2066]: I0213 19:34:37.810483 2066 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:34:37.847355 kubelet[2066]: E0213 19:34:37.847294 2066 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:37.849769 kubelet[2066]: I0213 19:34:37.849731 2066 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:37.855369 kubelet[2066]: E0213 19:34:37.855272 2066 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:34:37.855369 kubelet[2066]: I0213 19:34:37.855301 2066 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:34:37.858794 kubelet[2066]: I0213 19:34:37.858744 2066 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:34:37.859645 kubelet[2066]: I0213 19:34:37.859602 2066 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:34:37.859773 kubelet[2066]: I0213 19:34:37.859741 2066 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:37.859927 kubelet[2066]: I0213 19:34:37.859764 2066 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:34:37.860112 kubelet[2066]: I0213 19:34:37.860061 2066 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:34:37.860112 kubelet[2066]: I0213 19:34:37.860073 2066 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:34:37.860300 kubelet[2066]: I0213 19:34:37.860248 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:37.864104 kubelet[2066]: I0213 19:34:37.863966 2066 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:34:37.864104 kubelet[2066]: I0213 19:34:37.864002 2066 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:37.864104 kubelet[2066]: I0213 19:34:37.864084 2066 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:34:37.864104 kubelet[2066]: I0213 19:34:37.864094 2066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:37.866676 kubelet[2066]: W0213 19:34:37.866631 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:37.866785 kubelet[2066]: E0213 19:34:37.866690 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:37.866785 kubelet[2066]: I0213 19:34:37.866763 2066 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:34:37.868675 kubelet[2066]: I0213 19:34:37.868638 2066 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:37.870592 kubelet[2066]: W0213 19:34:37.870545 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:37.870636 kubelet[2066]: E0213 19:34:37.870597 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:37.871131 kubelet[2066]: W0213 19:34:37.871098 2066 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:34:37.871888 kubelet[2066]: I0213 19:34:37.871857 2066 server.go:1269] "Started kubelet" Feb 13 19:34:37.872780 kubelet[2066]: I0213 19:34:37.872742 2066 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:37.874016 kubelet[2066]: I0213 19:34:37.873979 2066 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:34:37.875750 kubelet[2066]: I0213 19:34:37.875714 2066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:37.880023 kubelet[2066]: I0213 19:34:37.879746 2066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:37.880023 kubelet[2066]: I0213 19:34:37.879969 2066 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:37.880339 kubelet[2066]: I0213 19:34:37.880318 2066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:34:37.880653 kubelet[2066]: I0213 19:34:37.880615 2066 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:34:37.881509 kubelet[2066]: E0213 19:34:37.881471 2066 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:37.881952 kubelet[2066]: I0213 19:34:37.881830 2066 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:34:37.882068 kubelet[2066]: I0213 19:34:37.882050 2066 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:37.882214 kubelet[2066]: W0213 19:34:37.882162 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:37.882214 kubelet[2066]: E0213 19:34:37.882211 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:37.882391 kubelet[2066]: E0213 19:34:37.882346 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Feb 13 19:34:37.883398 kubelet[2066]: E0213 19:34:37.881901 2066 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db8c911fe5d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:34:37.871834576 +0000 UTC m=+0.714261159,LastTimestamp:2025-02-13 19:34:37.871834576 +0000 UTC m=+0.714261159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:34:37.885038 kubelet[2066]: I0213 19:34:37.885016 2066 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:37.885296 kubelet[2066]: I0213 19:34:37.885097 2066 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:37.886572 kubelet[2066]: I0213 19:34:37.886545 2066 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:37.890305 kubelet[2066]: E0213 19:34:37.889556 2066 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:37.896623 kubelet[2066]: I0213 19:34:37.896588 2066 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:34:37.896623 kubelet[2066]: I0213 19:34:37.896608 2066 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:37.896623 kubelet[2066]: I0213 19:34:37.896625 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:37.898444 kubelet[2066]: I0213 19:34:37.898409 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:37.899724 kubelet[2066]: I0213 19:34:37.899672 2066 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:34:37.899724 kubelet[2066]: I0213 19:34:37.899700 2066 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:34:37.899724 kubelet[2066]: I0213 19:34:37.899719 2066 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:34:37.899837 kubelet[2066]: E0213 19:34:37.899763 2066 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:37.901059 kubelet[2066]: W0213 19:34:37.900917 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:37.901059 kubelet[2066]: E0213 19:34:37.900953 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:37.974813 kubelet[2066]: I0213 19:34:37.974757 2066 policy_none.go:49] "None policy: Start" Feb 13 19:34:37.975696 kubelet[2066]: I0213 19:34:37.975658 2066 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:34:37.975696 kubelet[2066]: I0213 19:34:37.975691 2066 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:37.982345 kubelet[2066]: E0213 19:34:37.982260 2066 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:37.983167 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:34:38.000058 kubelet[2066]: E0213 19:34:37.999961 2066 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:34:38.001872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:34:38.004855 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:34:38.012036 kubelet[2066]: I0213 19:34:38.011825 2066 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:38.012036 kubelet[2066]: I0213 19:34:38.012025 2066 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:34:38.012147 kubelet[2066]: I0213 19:34:38.012038 2066 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:38.012466 kubelet[2066]: I0213 19:34:38.012320 2066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:38.013490 kubelet[2066]: E0213 19:34:38.013366 2066 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:34:38.083496 kubelet[2066]: E0213 19:34:38.083357 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Feb 13 19:34:38.114720 kubelet[2066]: I0213 19:34:38.114689 2066 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:38.116843 kubelet[2066]: E0213 19:34:38.116817 2066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Feb 13 19:34:38.208652 systemd[1]: Created slice kubepods-burstable-podf8fd48ea9db8dfa5495f79462ec2b0ce.slice - libcontainer container kubepods-burstable-podf8fd48ea9db8dfa5495f79462ec2b0ce.slice. Feb 13 19:34:38.238437 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 19:34:38.244014 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
Feb 13 19:34:38.284202 kubelet[2066]: I0213 19:34:38.284063 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:38.284202 kubelet[2066]: I0213 19:34:38.284098 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:38.284202 kubelet[2066]: I0213 19:34:38.284120 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:38.284202 kubelet[2066]: I0213 19:34:38.284135 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:38.284202 kubelet[2066]: I0213 19:34:38.284149 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:38.284718 kubelet[2066]: I0213 19:34:38.284164 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:38.284718 kubelet[2066]: I0213 19:34:38.284233 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:38.284718 kubelet[2066]: I0213 19:34:38.284283 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:38.284718 kubelet[2066]: I0213 19:34:38.284312 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:38.318578 kubelet[2066]: I0213 19:34:38.318194 2066 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:38.318578 kubelet[2066]: E0213 19:34:38.318543 2066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Feb 13 19:34:38.484460 kubelet[2066]: E0213 19:34:38.484331 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Feb 13 19:34:38.537158 kubelet[2066]: E0213 19:34:38.537086 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.537946 containerd[1441]: time="2025-02-13T19:34:38.537787026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8fd48ea9db8dfa5495f79462ec2b0ce,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:38.542087 kubelet[2066]: E0213 19:34:38.542062 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.542461 containerd[1441]: time="2025-02-13T19:34:38.542411930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:38.547935 kubelet[2066]: E0213 19:34:38.547725 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.550742 containerd[1441]: time="2025-02-13T19:34:38.550697185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:38.720017 kubelet[2066]: I0213 19:34:38.719891 2066 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:38.720233 kubelet[2066]: E0213 19:34:38.720210 2066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Feb 13 19:34:38.832279 kubelet[2066]: W0213 19:34:38.832210 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:38.832377 kubelet[2066]: E0213 19:34:38.832281 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:39.033337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672718693.mount: Deactivated successfully. 
Feb 13 19:34:39.037589 containerd[1441]: time="2025-02-13T19:34:39.037537494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:39.038795 containerd[1441]: time="2025-02-13T19:34:39.038763970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:39.039491 containerd[1441]: time="2025-02-13T19:34:39.039460527Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:39.040236 containerd[1441]: time="2025-02-13T19:34:39.040210441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:39.040904 containerd[1441]: time="2025-02-13T19:34:39.040875233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:34:39.041899 containerd[1441]: time="2025-02-13T19:34:39.041855998Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:39.042837 containerd[1441]: time="2025-02-13T19:34:39.042787973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:39.044594 containerd[1441]: time="2025-02-13T19:34:39.044547733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:39.046772 containerd[1441]: time="2025-02-13T19:34:39.046741554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.967894ms" Feb 13 19:34:39.050979 containerd[1441]: time="2025-02-13T19:34:39.048434819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.726133ms" Feb 13 19:34:39.055171 containerd[1441]: time="2025-02-13T19:34:39.054971180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.046464ms" Feb 13 19:34:39.203391 containerd[1441]: time="2025-02-13T19:34:39.202591700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:39.203391 containerd[1441]: time="2025-02-13T19:34:39.203130071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:39.203391 containerd[1441]: time="2025-02-13T19:34:39.203183948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:39.203391 containerd[1441]: time="2025-02-13T19:34:39.203196646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.203391 containerd[1441]: time="2025-02-13T19:34:39.203279445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.204085 containerd[1441]: time="2025-02-13T19:34:39.202200740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:39.204134 containerd[1441]: time="2025-02-13T19:34:39.204103745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:39.204168 containerd[1441]: time="2025-02-13T19:34:39.204133989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.204275 containerd[1441]: time="2025-02-13T19:34:39.204218109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:39.204275 containerd[1441]: time="2025-02-13T19:34:39.204250716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.204339 containerd[1441]: time="2025-02-13T19:34:39.204299025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.204492 containerd[1441]: time="2025-02-13T19:34:39.204454808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:39.224223 systemd[1]: Started cri-containerd-82de6b822e9f55faeff0bf52ebfc1a9a41104f0b35942c470c6e50598c1a4ace.scope - libcontainer container 82de6b822e9f55faeff0bf52ebfc1a9a41104f0b35942c470c6e50598c1a4ace. Feb 13 19:34:39.227885 systemd[1]: Started cri-containerd-00adaa54efc1d5df3fcc7aca23e58dc2706e87648d6ed7a2fd6d3d7a99d97403.scope - libcontainer container 00adaa54efc1d5df3fcc7aca23e58dc2706e87648d6ed7a2fd6d3d7a99d97403. Feb 13 19:34:39.229234 systemd[1]: Started cri-containerd-b8aee00dc5dbe438b6ac7e8026845907b99b0a69b67286dc9d30ba4ff1465327.scope - libcontainer container b8aee00dc5dbe438b6ac7e8026845907b99b0a69b67286dc9d30ba4ff1465327. 
Feb 13 19:34:39.244030 kubelet[2066]: W0213 19:34:39.243294 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:39.244030 kubelet[2066]: E0213 19:34:39.243345 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:39.259774 containerd[1441]: time="2025-02-13T19:34:39.259273671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8fd48ea9db8dfa5495f79462ec2b0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"82de6b822e9f55faeff0bf52ebfc1a9a41104f0b35942c470c6e50598c1a4ace\"" Feb 13 19:34:39.260573 kubelet[2066]: E0213 19:34:39.260523 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:39.262784 containerd[1441]: time="2025-02-13T19:34:39.262753375Z" level=info msg="CreateContainer within sandbox \"82de6b822e9f55faeff0bf52ebfc1a9a41104f0b35942c470c6e50598c1a4ace\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:34:39.267057 containerd[1441]: time="2025-02-13T19:34:39.267021487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"00adaa54efc1d5df3fcc7aca23e58dc2706e87648d6ed7a2fd6d3d7a99d97403\"" Feb 13 19:34:39.268013 kubelet[2066]: E0213 19:34:39.267642 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:39.269225 containerd[1441]: time="2025-02-13T19:34:39.269196882Z" level=info msg="CreateContainer within sandbox \"00adaa54efc1d5df3fcc7aca23e58dc2706e87648d6ed7a2fd6d3d7a99d97403\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:34:39.271029 containerd[1441]: time="2025-02-13T19:34:39.271001747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8aee00dc5dbe438b6ac7e8026845907b99b0a69b67286dc9d30ba4ff1465327\"" Feb 13 19:34:39.271721 kubelet[2066]: E0213 19:34:39.271696 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:39.274179 containerd[1441]: time="2025-02-13T19:34:39.274150896Z" level=info msg="CreateContainer within sandbox \"b8aee00dc5dbe438b6ac7e8026845907b99b0a69b67286dc9d30ba4ff1465327\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:34:39.281050 containerd[1441]: time="2025-02-13T19:34:39.280962211Z" level=info msg="CreateContainer within sandbox \"82de6b822e9f55faeff0bf52ebfc1a9a41104f0b35942c470c6e50598c1a4ace\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"472ac56ff7d817b6c37255972b698233b94814093856db6ae4244e8da0fa28ec\"" Feb 13 19:34:39.281663 
containerd[1441]: time="2025-02-13T19:34:39.281623397Z" level=info msg="StartContainer for \"472ac56ff7d817b6c37255972b698233b94814093856db6ae4244e8da0fa28ec\"" Feb 13 19:34:39.285421 kubelet[2066]: E0213 19:34:39.285386 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Feb 13 19:34:39.285896 containerd[1441]: time="2025-02-13T19:34:39.285863389Z" level=info msg="CreateContainer within sandbox \"00adaa54efc1d5df3fcc7aca23e58dc2706e87648d6ed7a2fd6d3d7a99d97403\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f82fbf8e0cdfc60391d6016a382ce70856b6d8ae023609c4ec3d1dfc5aa13e1\"" Feb 13 19:34:39.286349 containerd[1441]: time="2025-02-13T19:34:39.286321125Z" level=info msg="StartContainer for \"2f82fbf8e0cdfc60391d6016a382ce70856b6d8ae023609c4ec3d1dfc5aa13e1\"" Feb 13 19:34:39.293233 containerd[1441]: time="2025-02-13T19:34:39.293199094Z" level=info msg="CreateContainer within sandbox \"b8aee00dc5dbe438b6ac7e8026845907b99b0a69b67286dc9d30ba4ff1465327\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc8ebcb724489110f697ae6f4a7e3fdb31a3582cf1604c5f2dc64dcb4fb0ccb1\"" Feb 13 19:34:39.293615 containerd[1441]: time="2025-02-13T19:34:39.293590415Z" level=info msg="StartContainer for \"bc8ebcb724489110f697ae6f4a7e3fdb31a3582cf1604c5f2dc64dcb4fb0ccb1\"" Feb 13 19:34:39.306161 systemd[1]: Started cri-containerd-472ac56ff7d817b6c37255972b698233b94814093856db6ae4244e8da0fa28ec.scope - libcontainer container 472ac56ff7d817b6c37255972b698233b94814093856db6ae4244e8da0fa28ec. Feb 13 19:34:39.308569 systemd[1]: Started cri-containerd-2f82fbf8e0cdfc60391d6016a382ce70856b6d8ae023609c4ec3d1dfc5aa13e1.scope - libcontainer container 2f82fbf8e0cdfc60391d6016a382ce70856b6d8ae023609c4ec3d1dfc5aa13e1. Feb 13 19:34:39.312864 kubelet[2066]: W0213 19:34:39.312812 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:39.312953 kubelet[2066]: E0213 19:34:39.312877 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:39.328200 systemd[1]: Started cri-containerd-bc8ebcb724489110f697ae6f4a7e3fdb31a3582cf1604c5f2dc64dcb4fb0ccb1.scope - libcontainer container bc8ebcb724489110f697ae6f4a7e3fdb31a3582cf1604c5f2dc64dcb4fb0ccb1. 
Feb 13 19:34:39.352403 containerd[1441]: time="2025-02-13T19:34:39.352355249Z" level=info msg="StartContainer for \"472ac56ff7d817b6c37255972b698233b94814093856db6ae4244e8da0fa28ec\" returns successfully" Feb 13 19:34:39.357747 containerd[1441]: time="2025-02-13T19:34:39.357719090Z" level=info msg="StartContainer for \"2f82fbf8e0cdfc60391d6016a382ce70856b6d8ae023609c4ec3d1dfc5aa13e1\" returns successfully" Feb 13 19:34:39.367141 containerd[1441]: time="2025-02-13T19:34:39.367111661Z" level=info msg="StartContainer for \"bc8ebcb724489110f697ae6f4a7e3fdb31a3582cf1604c5f2dc64dcb4fb0ccb1\" returns successfully" Feb 13 19:34:39.406124 kubelet[2066]: W0213 19:34:39.406066 2066 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Feb 13 19:34:39.406240 kubelet[2066]: E0213 19:34:39.406220 2066 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:39.521501 kubelet[2066]: I0213 19:34:39.521407 2066 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:39.521760 kubelet[2066]: E0213 19:34:39.521732 2066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Feb 13 19:34:39.908316 kubelet[2066]: E0213 19:34:39.908283 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:39.910061 kubelet[2066]: E0213 19:34:39.910040 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:39.911779 kubelet[2066]: E0213 19:34:39.911753 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:40.915653 kubelet[2066]: E0213 19:34:40.915579 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:40.917180 kubelet[2066]: E0213 19:34:40.917128 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:41.124655 kubelet[2066]: I0213 19:34:41.124618 2066 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:41.385833 kubelet[2066]: E0213 19:34:41.385768 2066 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:34:41.466038 kubelet[2066]: I0213 19:34:41.465685 2066 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:34:41.585163 kubelet[2066]: E0213 19:34:41.585059 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not 
found" event="&Event{ObjectMeta:{localhost.1823db8c911fe5d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:34:37.871834576 +0000 UTC m=+0.714261159,LastTimestamp:2025-02-13 19:34:37.871834576 +0000 UTC m=+0.714261159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:34:41.638711 kubelet[2066]: E0213 19:34:41.638485 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db8c922e1e41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:34:37.889543745 +0000 UTC m=+0.731970288,LastTimestamp:2025-02-13 19:34:37.889543745 +0000 UTC m=+0.731970288,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:34:41.692313 kubelet[2066]: E0213 19:34:41.692216 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db8c928fe16e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:34:37.895950702 +0000 UTC m=+0.738377285,LastTimestamp:2025-02-13 19:34:37.895950702 +0000 UTC m=+0.738377285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:34:41.867870 kubelet[2066]: I0213 19:34:41.867649 2066 apiserver.go:52] "Watching apiserver" Feb 13 19:34:41.881931 kubelet[2066]: I0213 19:34:41.881908 2066 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:34:43.623648 systemd[1]: Reloading requested from client PID 2345 ('systemctl') (unit session-5.scope)... Feb 13 19:34:43.623664 systemd[1]: Reloading... Feb 13 19:34:43.689135 zram_generator::config[2384]: No configuration found. Feb 13 19:34:43.772337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:43.836042 systemd[1]: Reloading finished in 211 ms. Feb 13 19:34:43.867844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:43.887340 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:34:43.887536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:43.887581 systemd[1]: kubelet.service: Consumed 1.078s CPU time, 118.2M memory peak, 0B memory swap peak. 
Feb 13 19:34:43.898325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:43.986228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:43.991957 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:44.029646 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:44.029646 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:44.029646 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:44.030008 kubelet[2426]: I0213 19:34:44.029720 2426 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:44.036498 kubelet[2426]: I0213 19:34:44.036460 2426 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:34:44.036498 kubelet[2426]: I0213 19:34:44.036487 2426 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:44.036819 kubelet[2426]: I0213 19:34:44.036680 2426 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:34:44.037952 kubelet[2426]: I0213 19:34:44.037928 2426 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:34:44.041565 kubelet[2426]: I0213 19:34:44.041236 2426 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:44.044777 kubelet[2426]: E0213 19:34:44.044749 2426 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:34:44.044777 kubelet[2426]: I0213 19:34:44.044776 2426 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:34:44.049684 kubelet[2426]: I0213 19:34:44.049574 2426 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:34:44.049684 kubelet[2426]: I0213 19:34:44.049677 2426 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:34:44.049792 kubelet[2426]: I0213 19:34:44.049763 2426 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:44.049937 kubelet[2426]: I0213 19:34:44.049790 2426 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:34:44.050041 kubelet[2426]: I0213 19:34:44.049945 2426 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:34:44.050041 kubelet[2426]: I0213 19:34:44.049955 2426 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:34:44.050041 kubelet[2426]: I0213 19:34:44.049981 2426 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:44.050115 kubelet[2426]: I0213 19:34:44.050106 2426 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:34:44.050136 kubelet[2426]: I0213 19:34:44.050117 2426 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:44.050159 kubelet[2426]: I0213 19:34:44.050135 2426 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:34:44.050159 kubelet[2426]: I0213 19:34:44.050145 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:44.055013 kubelet[2426]: I0213 19:34:44.052671 2426 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:34:44.055013 kubelet[2426]: I0213 19:34:44.053180 2426 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:44.055013 kubelet[2426]: I0213 19:34:44.054052 2426 server.go:1269] "Started kubelet" Feb 13 19:34:44.055013 kubelet[2426]: I0213 19:34:44.054399 2426 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:44.055013 kubelet[2426]: I0213 
19:34:44.054503 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:44.055013 kubelet[2426]: I0213 19:34:44.054722 2426 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:44.056135 kubelet[2426]: I0213 19:34:44.056116 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:44.056234 kubelet[2426]: I0213 19:34:44.056196 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:34:44.056526 kubelet[2426]: I0213 19:34:44.056502 2426 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:34:44.056629 kubelet[2426]: I0213 19:34:44.056611 2426 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:34:44.056735 kubelet[2426]: I0213 19:34:44.056718 2426 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:44.057006 kubelet[2426]: E0213 19:34:44.056964 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:44.057782 kubelet[2426]: I0213 19:34:44.057763 2426 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:34:44.059864 kubelet[2426]: I0213 19:34:44.059838 2426 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:44.060400 kubelet[2426]: I0213 19:34:44.060375 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:44.067182 kubelet[2426]: I0213 19:34:44.067157 2426 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:44.072351 kubelet[2426]: I0213 19:34:44.071227 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:44.075710 kubelet[2426]: I0213 19:34:44.075686 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:34:44.075916 kubelet[2426]: I0213 19:34:44.075886 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:34:44.076026 kubelet[2426]: I0213 19:34:44.076014 2426 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:34:44.076142 kubelet[2426]: E0213 19:34:44.076124 2426 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:44.091192 kubelet[2426]: E0213 19:34:44.091165 2426 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:44.118298 kubelet[2426]: I0213 19:34:44.118276 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:34:44.118526 kubelet[2426]: I0213 19:34:44.118509 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:44.118588 kubelet[2426]: I0213 19:34:44.118579 2426 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:44.118790 kubelet[2426]: I0213 19:34:44.118773 2426 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:34:44.118866 kubelet[2426]: I0213 19:34:44.118843 2426 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:34:44.118915 kubelet[2426]: I0213 19:34:44.118907 2426 policy_none.go:49] "None policy: Start" Feb 13 19:34:44.119544 kubelet[2426]: I0213 19:34:44.119517 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:34:44.119544 kubelet[2426]: I0213 19:34:44.119543 2426 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:44.119698 kubelet[2426]: I0213 19:34:44.119680 2426 state_mem.go:75] "Updated machine memory state" Feb 13 19:34:44.124424 kubelet[2426]: I0213 19:34:44.123740 2426 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:44.124424 kubelet[2426]: I0213 19:34:44.123920 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:34:44.124424 kubelet[2426]: I0213 19:34:44.123930 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:44.124424 kubelet[2426]: I0213 19:34:44.124213 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:44.228895 kubelet[2426]: I0213 19:34:44.228793 2426 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:34:44.235778 kubelet[2426]: I0213 19:34:44.235738 2426 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:34:44.235859 kubelet[2426]: I0213 19:34:44.235823 2426 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:34:44.358308 kubelet[2426]: I0213 19:34:44.358272 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:44.358308 kubelet[2426]: I0213 19:34:44.358311 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:44.358467 kubelet[2426]: I0213 19:34:44.358334 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:44.358467 kubelet[2426]: I0213 19:34:44.358354 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:44.358467 kubelet[2426]: I0213 19:34:44.358371 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:44.358467 kubelet[2426]: I0213 19:34:44.358385 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:44.358467 kubelet[2426]: I0213 19:34:44.358399 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8fd48ea9db8dfa5495f79462ec2b0ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8fd48ea9db8dfa5495f79462ec2b0ce\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:44.358711 kubelet[2426]: I0213 19:34:44.358413 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:44.358711 kubelet[2426]: I0213 19:34:44.358430 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:44.487559 kubelet[2426]: E0213 19:34:44.486680 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:44.487559 kubelet[2426]: E0213 19:34:44.486712 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:44.487559 kubelet[2426]: E0213 19:34:44.487046 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.051193 kubelet[2426]: I0213 19:34:45.051144 2426 apiserver.go:52] "Watching apiserver" Feb 13 19:34:45.057726 kubelet[2426]: I0213 19:34:45.057680 2426 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:34:45.105058 kubelet[2426]: E0213 19:34:45.103779 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.105058 kubelet[2426]: E0213 19:34:45.104132 2426 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.109508 kubelet[2426]: E0213 19:34:45.109478 2426 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:45.109639 kubelet[2426]: E0213 19:34:45.109622 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.124522 kubelet[2426]: I0213 19:34:45.124089 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.124076638 podStartE2EDuration="1.124076638s" podCreationTimestamp="2025-02-13 19:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:45.123775465 +0000 UTC m=+1.128734702" watchObservedRunningTime="2025-02-13 19:34:45.124076638 +0000 UTC m=+1.129035875" Feb 13 19:34:45.137404 kubelet[2426]: I0213 19:34:45.137358 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1373448640000001 podStartE2EDuration="1.137344864s" podCreationTimestamp="2025-02-13 19:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:45.136902674 +0000 UTC m=+1.141861911" watchObservedRunningTime="2025-02-13 19:34:45.137344864 +0000 UTC m=+1.142304061" Feb 13 19:34:45.137516 kubelet[2426]: I0213 19:34:45.137480 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.13747483 podStartE2EDuration="1.13747483s" podCreationTimestamp="2025-02-13 19:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:45.129930852 +0000 UTC m=+1.134890089" watchObservedRunningTime="2025-02-13 19:34:45.13747483 +0000 UTC m=+1.142434068" Feb 13 19:34:45.381181 sudo[1582]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:45.383118 sshd[1579]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:45.386524 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:45874.service: Deactivated successfully. Feb 13 19:34:45.388763 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:34:45.389006 systemd[1]: session-5.scope: Consumed 6.039s CPU time, 154.7M memory peak, 0B memory swap peak. Feb 13 19:34:45.389744 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:34:45.390587 systemd-logind[1430]: Removed session 5. 
Feb 13 19:34:46.105251 kubelet[2426]: E0213 19:34:46.105217 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:48.441820 kubelet[2426]: E0213 19:34:48.441779 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:48.542679 kubelet[2426]: I0213 19:34:48.542640 2426 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:34:48.542992 containerd[1441]: time="2025-02-13T19:34:48.542945314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:34:48.543346 kubelet[2426]: I0213 19:34:48.543125 2426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:34:48.553497 kubelet[2426]: E0213 19:34:48.552731 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.530561 kubelet[2426]: W0213 19:34:49.530507 2426 reflector.go:561] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'localhost' and this object Feb 13 19:34:49.530561 kubelet[2426]: E0213 19:34:49.530553 2426 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 19:34:49.535685 systemd[1]: Created slice kubepods-besteffort-pode1bae205_7ba7_444a_a3b5_36d3adeaed43.slice - libcontainer container kubepods-besteffort-pode1bae205_7ba7_444a_a3b5_36d3adeaed43.slice. Feb 13 19:34:49.550887 systemd[1]: Created slice kubepods-burstable-podd38ea08b_94c3_4ad2_81f0_f49f26735f9e.slice - libcontainer container kubepods-burstable-podd38ea08b_94c3_4ad2_81f0_f49f26735f9e.slice. 
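The systemd slices created here follow the kubelet's systemd-cgroup naming: each pod gets a slice named from its QoS class plus its UID with the dashes escaped to underscores, e.g. pod e1bae205-7ba7-444a-a3b5-36d3adeaed43 becomes kubepods-besteffort-pode1bae205_7ba7_444a_a3b5_36d3adeaed43.slice. A small sketch of that mapping, derived from the unit names visible in these lines rather than from the kubelet source:

```python
# Sketch of the pod-slice naming convention visible in the systemd messages
# above: kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice.
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    qos = qos_class.lower()
    uid = pod_uid.replace("-", "_")
    if qos == "guaranteed":  # guaranteed pods sit directly under kubepods (not shown in this log)
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice_name("e1bae205-7ba7-444a-a3b5-36d3adeaed43", "BestEffort"))
# kubepods-besteffort-pode1bae205_7ba7_444a_a3b5_36d3adeaed43.slice
print(pod_slice_name("d38ea08b-94c3-4ad2-81f0-f49f26735f9e", "Burstable"))
# kubepods-burstable-podd38ea08b_94c3_4ad2_81f0_f49f26735f9e.slice
```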
Feb 13 19:34:49.594712 kubelet[2426]: I0213 19:34:49.594675 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c92h\" (UniqueName: \"kubernetes.io/projected/e1bae205-7ba7-444a-a3b5-36d3adeaed43-kube-api-access-9c92h\") pod \"kube-proxy-dhn7k\" (UID: \"e1bae205-7ba7-444a-a3b5-36d3adeaed43\") " pod="kube-system/kube-proxy-dhn7k" Feb 13 19:34:49.594712 kubelet[2426]: I0213 19:34:49.594713 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-cni\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.595034 kubelet[2426]: I0213 19:34:49.594734 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1bae205-7ba7-444a-a3b5-36d3adeaed43-lib-modules\") pod \"kube-proxy-dhn7k\" (UID: \"e1bae205-7ba7-444a-a3b5-36d3adeaed43\") " pod="kube-system/kube-proxy-dhn7k" Feb 13 19:34:49.595034 kubelet[2426]: I0213 19:34:49.594805 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1bae205-7ba7-444a-a3b5-36d3adeaed43-kube-proxy\") pod \"kube-proxy-dhn7k\" (UID: \"e1bae205-7ba7-444a-a3b5-36d3adeaed43\") " pod="kube-system/kube-proxy-dhn7k" Feb 13 19:34:49.595034 kubelet[2426]: I0213 19:34:49.594845 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-flannel-cfg\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.595034 kubelet[2426]: I0213 19:34:49.594861 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-xtables-lock\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.595034 kubelet[2426]: I0213 19:34:49.594877 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1bae205-7ba7-444a-a3b5-36d3adeaed43-xtables-lock\") pod \"kube-proxy-dhn7k\" (UID: \"e1bae205-7ba7-444a-a3b5-36d3adeaed43\") " pod="kube-system/kube-proxy-dhn7k" Feb 13 19:34:49.595180 kubelet[2426]: I0213 19:34:49.594892 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-cni-plugin\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.595180 kubelet[2426]: I0213 19:34:49.594907 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcdh\" (UniqueName: \"kubernetes.io/projected/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-kube-api-access-tlcdh\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.595180 kubelet[2426]: I0213 19:34:49.594929 2426 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-run\") pod \"kube-flannel-ds-4px8r\" (UID: \"d38ea08b-94c3-4ad2-81f0-f49f26735f9e\") " pod="kube-flannel/kube-flannel-ds-4px8r" Feb 13 19:34:49.848432 kubelet[2426]: E0213 19:34:49.848131 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.849032 containerd[1441]: time="2025-02-13T19:34:49.848983511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhn7k,Uid:e1bae205-7ba7-444a-a3b5-36d3adeaed43,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:49.868727 containerd[1441]: time="2025-02-13T19:34:49.868479564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:49.868727 containerd[1441]: time="2025-02-13T19:34:49.868530282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:49.868727 containerd[1441]: time="2025-02-13T19:34:49.868541370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.868727 containerd[1441]: time="2025-02-13T19:34:49.868610422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.887203 systemd[1]: Started cri-containerd-abc5a5bc0be07292698b1d49dc8a9acc4e147a79e6472333bf00b25717db9953.scope - libcontainer container abc5a5bc0be07292698b1d49dc8a9acc4e147a79e6472333bf00b25717db9953. Feb 13 19:34:49.904075 containerd[1441]: time="2025-02-13T19:34:49.904037047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhn7k,Uid:e1bae205-7ba7-444a-a3b5-36d3adeaed43,Namespace:kube-system,Attempt:0,} returns sandbox id \"abc5a5bc0be07292698b1d49dc8a9acc4e147a79e6472333bf00b25717db9953\"" Feb 13 19:34:49.904807 kubelet[2426]: E0213 19:34:49.904783 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.907732 containerd[1441]: time="2025-02-13T19:34:49.907626225Z" level=info msg="CreateContainer within sandbox \"abc5a5bc0be07292698b1d49dc8a9acc4e147a79e6472333bf00b25717db9953\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:34:49.922500 containerd[1441]: time="2025-02-13T19:34:49.922460013Z" level=info msg="CreateContainer within sandbox \"abc5a5bc0be07292698b1d49dc8a9acc4e147a79e6472333bf00b25717db9953\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d39809faaef0f79b0c5650ab1f00beefd84121494f178c6026d30e7740cac84\"" Feb 13 19:34:49.923053 containerd[1441]: time="2025-02-13T19:34:49.923027360Z" level=info msg="StartContainer for \"1d39809faaef0f79b0c5650ab1f00beefd84121494f178c6026d30e7740cac84\"" Feb 13 19:34:49.946158 systemd[1]: Started cri-containerd-1d39809faaef0f79b0c5650ab1f00beefd84121494f178c6026d30e7740cac84.scope - libcontainer container 1d39809faaef0f79b0c5650ab1f00beefd84121494f178c6026d30e7740cac84. 
Feb 13 19:34:49.975843 containerd[1441]: time="2025-02-13T19:34:49.975696344Z" level=info msg="StartContainer for \"1d39809faaef0f79b0c5650ab1f00beefd84121494f178c6026d30e7740cac84\" returns successfully" Feb 13 19:34:50.104275 kubelet[2426]: E0213 19:34:50.104103 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.114909 kubelet[2426]: E0213 19:34:50.114873 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.115050 kubelet[2426]: E0213 19:34:50.114885 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.140424 kubelet[2426]: I0213 19:34:50.139939 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dhn7k" podStartSLOduration=1.139922366 podStartE2EDuration="1.139922366s" podCreationTimestamp="2025-02-13 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:50.139861283 +0000 UTC m=+6.144820520" watchObservedRunningTime="2025-02-13 19:34:50.139922366 +0000 UTC m=+6.144881563" Feb 13 19:34:50.703511 kubelet[2426]: E0213 19:34:50.703456 2426 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:34:50.703511 kubelet[2426]: E0213 19:34:50.703499 2426 projected.go:194] Error preparing data for projected volume kube-api-access-tlcdh for pod kube-flannel/kube-flannel-ds-4px8r: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:34:50.704018 kubelet[2426]: E0213 19:34:50.703566 2426 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-kube-api-access-tlcdh podName:d38ea08b-94c3-4ad2-81f0-f49f26735f9e nodeName:}" failed. No retries permitted until 2025-02-13 19:34:51.20354539 +0000 UTC m=+7.208504587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tlcdh" (UniqueName: "kubernetes.io/projected/d38ea08b-94c3-4ad2-81f0-f49f26735f9e-kube-api-access-tlcdh") pod "kube-flannel-ds-4px8r" (UID: "d38ea08b-94c3-4ad2-81f0-f49f26735f9e") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:34:51.357699 kubelet[2426]: E0213 19:34:51.357658 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:51.358629 containerd[1441]: time="2025-02-13T19:34:51.358244952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4px8r,Uid:d38ea08b-94c3-4ad2-81f0-f49f26735f9e,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:34:51.379266 containerd[1441]: time="2025-02-13T19:34:51.378850284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:51.379266 containerd[1441]: time="2025-02-13T19:34:51.379226412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:51.379266 containerd[1441]: time="2025-02-13T19:34:51.379239701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:51.379779 containerd[1441]: time="2025-02-13T19:34:51.379321715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:51.398140 systemd[1]: Started cri-containerd-03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc.scope - libcontainer container 03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc. Feb 13 19:34:51.424611 containerd[1441]: time="2025-02-13T19:34:51.424561121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4px8r,Uid:d38ea08b-94c3-4ad2-81f0-f49f26735f9e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\"" Feb 13 19:34:51.425169 kubelet[2426]: E0213 19:34:51.425143 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:51.426072 containerd[1441]: time="2025-02-13T19:34:51.426045542Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:34:52.724890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770866385.mount: Deactivated successfully. Feb 13 19:34:52.756390 containerd[1441]: time="2025-02-13T19:34:52.756346251Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:52.757303 containerd[1441]: time="2025-02-13T19:34:52.757276667Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673530" Feb 13 19:34:52.758168 containerd[1441]: time="2025-02-13T19:34:52.758145325Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:52.760330 containerd[1441]: time="2025-02-13T19:34:52.760299460Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:52.761146 containerd[1441]: time="2025-02-13T19:34:52.761105879Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.335029157s" Feb 13 19:34:52.761179 containerd[1441]: time="2025-02-13T19:34:52.761145944Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:34:52.764336 containerd[1441]: time="2025-02-13T19:34:52.764298456Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:34:52.775214 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2383750202.mount: Deactivated successfully. Feb 13 19:34:52.776808 containerd[1441]: time="2025-02-13T19:34:52.776775224Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124\"" Feb 13 19:34:52.777272 containerd[1441]: time="2025-02-13T19:34:52.777177633Z" level=info msg="StartContainer for \"0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124\"" Feb 13 19:34:52.802124 systemd[1]: Started cri-containerd-0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124.scope - libcontainer container 0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124. Feb 13 19:34:52.823337 containerd[1441]: time="2025-02-13T19:34:52.823280107Z" level=info msg="StartContainer for \"0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124\" returns successfully" Feb 13 19:34:52.828396 systemd[1]: cri-containerd-0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124.scope: Deactivated successfully. Feb 13 19:34:52.864956 containerd[1441]: time="2025-02-13T19:34:52.864870426Z" level=info msg="shim disconnected" id=0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124 namespace=k8s.io Feb 13 19:34:52.864956 containerd[1441]: time="2025-02-13T19:34:52.864941670Z" level=warning msg="cleaning up after shim disconnected" id=0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124 namespace=k8s.io Feb 13 19:34:52.864956 containerd[1441]: time="2025-02-13T19:34:52.864952197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:53.121075 kubelet[2426]: E0213 19:34:53.121011 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:53.123328 containerd[1441]: time="2025-02-13T19:34:53.122980735Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:34:53.673115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c095960df7f7562b003036533d58c21e62b9e16c83cbad8002d889bf33c1124-rootfs.mount: Deactivated successfully. Feb 13 19:34:54.475835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019455458.mount: Deactivated successfully. 
Feb 13 19:34:55.048094 containerd[1441]: time="2025-02-13T19:34:55.048050070Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.051780 containerd[1441]: time="2025-02-13T19:34:55.051516639Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:34:55.053028 containerd[1441]: time="2025-02-13T19:34:55.052415938Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.055158 containerd[1441]: time="2025-02-13T19:34:55.055122360Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.056362 containerd[1441]: time="2025-02-13T19:34:55.056312407Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.933194192s" Feb 13 19:34:55.056362 containerd[1441]: time="2025-02-13T19:34:55.056353108Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:34:55.060026 containerd[1441]: time="2025-02-13T19:34:55.059260672Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:34:55.075970 containerd[1441]: time="2025-02-13T19:34:55.075917853Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651\"" Feb 13 19:34:55.076747 containerd[1441]: time="2025-02-13T19:34:55.076712739Z" level=info msg="StartContainer for \"4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651\"" Feb 13 19:34:55.109228 systemd[1]: Started cri-containerd-4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651.scope - libcontainer container 4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651. Feb 13 19:34:55.141841 systemd[1]: cri-containerd-4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651.scope: Deactivated successfully. Feb 13 19:34:55.201564 containerd[1441]: time="2025-02-13T19:34:55.201454406Z" level=info msg="StartContainer for \"4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651\" returns successfully" Feb 13 19:34:55.211424 kubelet[2426]: I0213 19:34:55.211396 2426 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:34:55.220220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651-rootfs.mount: Deactivated successfully. 
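The install-cni-plugin and install-cni containers above only copy the flannel CNI binary and config into place; pod networking additionally needs the flannel daemon to write /run/flannel/subnet.env, which the CNI plugin reads (its absence is what the sandbox failures a few lines further down report). A sketch of that file's simple KEY=VALUE format with illustrative values; the real values depend on the cluster's pod CIDR (192.168.0.0/24 on this node) and are not taken from this cluster:

```python
# The flannel CNI plugin reads /run/flannel/subnet.env, a KEY=VALUE file
# written by flanneld once it is running. Tiny parser plus example contents;
# the values below are assumptions for illustration only.
EXAMPLE_SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
"""

def parse_subnet_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

print(parse_subnet_env(EXAMPLE_SUBNET_ENV))
# {'FLANNEL_NETWORK': '192.168.0.0/16', 'FLANNEL_SUBNET': '192.168.0.1/24', ...}
```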
Feb 13 19:34:55.224253 containerd[1441]: time="2025-02-13T19:34:55.224140704Z" level=info msg="shim disconnected" id=4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651 namespace=k8s.io Feb 13 19:34:55.224486 containerd[1441]: time="2025-02-13T19:34:55.224273492Z" level=warning msg="cleaning up after shim disconnected" id=4850fc9d201e1907571dfea108217711f4e6ab9b0fbe62e829b4570c11546651 namespace=k8s.io Feb 13 19:34:55.224486 containerd[1441]: time="2025-02-13T19:34:55.224285498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:55.239644 systemd[1]: Created slice kubepods-burstable-pod9d64c6e7_7220_43c3_9b97_d4662517937f.slice - libcontainer container kubepods-burstable-pod9d64c6e7_7220_43c3_9b97_d4662517937f.slice. Feb 13 19:34:55.245851 systemd[1]: Created slice kubepods-burstable-podceea40ec_9a69_4c33_972f_fa26eeb46d0a.slice - libcontainer container kubepods-burstable-podceea40ec_9a69_4c33_972f_fa26eeb46d0a.slice. Feb 13 19:34:55.334675 kubelet[2426]: I0213 19:34:55.334621 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkdf\" (UniqueName: \"kubernetes.io/projected/ceea40ec-9a69-4c33-972f-fa26eeb46d0a-kube-api-access-fkkdf\") pod \"coredns-6f6b679f8f-2mv2v\" (UID: \"ceea40ec-9a69-4c33-972f-fa26eeb46d0a\") " pod="kube-system/coredns-6f6b679f8f-2mv2v" Feb 13 19:34:55.334675 kubelet[2426]: I0213 19:34:55.334674 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnvd\" (UniqueName: \"kubernetes.io/projected/9d64c6e7-7220-43c3-9b97-d4662517937f-kube-api-access-lsnvd\") pod \"coredns-6f6b679f8f-bbspl\" (UID: \"9d64c6e7-7220-43c3-9b97-d4662517937f\") " pod="kube-system/coredns-6f6b679f8f-bbspl" Feb 13 19:34:55.334847 kubelet[2426]: I0213 19:34:55.334693 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceea40ec-9a69-4c33-972f-fa26eeb46d0a-config-volume\") pod \"coredns-6f6b679f8f-2mv2v\" (UID: \"ceea40ec-9a69-4c33-972f-fa26eeb46d0a\") " pod="kube-system/coredns-6f6b679f8f-2mv2v" Feb 13 19:34:55.334847 kubelet[2426]: I0213 19:34:55.334710 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d64c6e7-7220-43c3-9b97-d4662517937f-config-volume\") pod \"coredns-6f6b679f8f-bbspl\" (UID: \"9d64c6e7-7220-43c3-9b97-d4662517937f\") " pod="kube-system/coredns-6f6b679f8f-bbspl" Feb 13 19:34:55.544059 kubelet[2426]: E0213 19:34:55.543942 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:55.548521 containerd[1441]: time="2025-02-13T19:34:55.547399011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbspl,Uid:9d64c6e7-7220-43c3-9b97-d4662517937f,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:55.549547 kubelet[2426]: E0213 19:34:55.549501 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:55.550001 containerd[1441]: time="2025-02-13T19:34:55.549821848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2mv2v,Uid:ceea40ec-9a69-4c33-972f-fa26eeb46d0a,Namespace:kube-system,Attempt:0,}" Feb 13 
19:34:55.591144 containerd[1441]: time="2025-02-13T19:34:55.590965087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2mv2v,Uid:ceea40ec-9a69-4c33-972f-fa26eeb46d0a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28a3b3dceef869a456ec1dad3ce2d98891d752d8822afd4e2bec81be93aa38b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:34:55.591221 kubelet[2426]: E0213 19:34:55.591188 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a3b3dceef869a456ec1dad3ce2d98891d752d8822afd4e2bec81be93aa38b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:34:55.591268 kubelet[2426]: E0213 19:34:55.591251 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a3b3dceef869a456ec1dad3ce2d98891d752d8822afd4e2bec81be93aa38b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-2mv2v" Feb 13 19:34:55.591295 kubelet[2426]: E0213 19:34:55.591274 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a3b3dceef869a456ec1dad3ce2d98891d752d8822afd4e2bec81be93aa38b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-2mv2v" Feb 13 19:34:55.591378 kubelet[2426]: E0213 19:34:55.591318 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2mv2v_kube-system(ceea40ec-9a69-4c33-972f-fa26eeb46d0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2mv2v_kube-system(ceea40ec-9a69-4c33-972f-fa26eeb46d0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a3b3dceef869a456ec1dad3ce2d98891d752d8822afd4e2bec81be93aa38b4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-2mv2v" podUID="ceea40ec-9a69-4c33-972f-fa26eeb46d0a" Feb 13 19:34:55.592308 containerd[1441]: time="2025-02-13T19:34:55.591640072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbspl,Uid:9d64c6e7-7220-43c3-9b97-d4662517937f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cafe5f03113ada1e709e14476836d136b188155a379b942a780e21128e50a3f3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:34:55.592438 kubelet[2426]: E0213 19:34:55.592390 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cafe5f03113ada1e709e14476836d136b188155a379b942a780e21128e50a3f3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:34:55.592482 kubelet[2426]: E0213 19:34:55.592447 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"cafe5f03113ada1e709e14476836d136b188155a379b942a780e21128e50a3f3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-bbspl" Feb 13 19:34:55.592482 kubelet[2426]: E0213 19:34:55.592464 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cafe5f03113ada1e709e14476836d136b188155a379b942a780e21128e50a3f3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-bbspl" Feb 13 19:34:55.592539 kubelet[2426]: E0213 19:34:55.592499 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bbspl_kube-system(9d64c6e7-7220-43c3-9b97-d4662517937f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bbspl_kube-system(9d64c6e7-7220-43c3-9b97-d4662517937f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cafe5f03113ada1e709e14476836d136b188155a379b942a780e21128e50a3f3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-bbspl" podUID="9d64c6e7-7220-43c3-9b97-d4662517937f" Feb 13 19:34:56.137023 kubelet[2426]: E0213 19:34:56.136969 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:56.139159 containerd[1441]: time="2025-02-13T19:34:56.138674735Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:34:56.150033 containerd[1441]: time="2025-02-13T19:34:56.149974062Z" level=info msg="CreateContainer within sandbox \"03ba6a54792fb7ccfd7eb27711de37b19570e1cb74505b19446bd657220f6bcc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2f3842f5f92db9208e9cd081807a7179e25455a3c4c2d859b2a1f311ce28cdf3\"" Feb 13 19:34:56.150578 containerd[1441]: time="2025-02-13T19:34:56.150537452Z" level=info msg="StartContainer for \"2f3842f5f92db9208e9cd081807a7179e25455a3c4c2d859b2a1f311ce28cdf3\"" Feb 13 19:34:56.190168 systemd[1]: Started cri-containerd-2f3842f5f92db9208e9cd081807a7179e25455a3c4c2d859b2a1f311ce28cdf3.scope - libcontainer container 2f3842f5f92db9208e9cd081807a7179e25455a3c4c2d859b2a1f311ce28cdf3. Feb 13 19:34:56.212062 containerd[1441]: time="2025-02-13T19:34:56.212013988Z" level=info msg="StartContainer for \"2f3842f5f92db9208e9cd081807a7179e25455a3c4c2d859b2a1f311ce28cdf3\" returns successfully" Feb 13 19:34:57.108941 update_engine[1433]: I20250213 19:34:57.108865 1433 update_attempter.cc:509] Updating boot flags... 
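Both coredns sandboxes fail with loadFlannelSubnetEnv because the flannel CNI plugin reads the node's subnet lease from /run/flannel/subnet.env, and that file is written by the kube-flannel daemon only after it starts, which happens immediately above (container 2f3842f5... at 19:34:56); the retried sandboxes then succeed at 19:35:08 and 19:35:10 further below. Kubelet simply propagates the error up its layers (log.go → kuberuntime_sandbox → kuberuntime_manager → pod_workers) and requeues the pods. A small sketch of reading that file, using the conventional FLANNEL_* key=value layout; the example values in the comment are illustrative, chosen to be consistent with the CNI config logged later, not copied from the node:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// parseSubnetEnv reads the key=value file the flannel daemon writes once it
// holds a subnet lease, e.g. (illustrative values):
//   FLANNEL_NETWORK=192.168.0.0/17
//   FLANNEL_SUBNET=192.168.0.1/24
//   FLANNEL_MTU=1450
//   FLANNEL_IPMASQ=true
func parseSubnetEnv(path string) (map[string]string, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err // the "no such file or directory" case seen above
    }
    defer f.Close()

    env := make(map[string]string)
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        if line == "" {
            continue
        }
        if k, v, ok := strings.Cut(line, "="); ok {
            env[k] = v
        }
    }
    return env, sc.Err()
}

func main() {
    env, err := parseSubnetEnv("/run/flannel/subnet.env")
    if err != nil {
        fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
        os.Exit(1)
    }
    fmt.Println("node subnet:", env["FLANNEL_SUBNET"], "mtu:", env["FLANNEL_MTU"])
}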
Feb 13 19:34:57.140419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2993) Feb 13 19:34:57.141900 kubelet[2426]: E0213 19:34:57.141869 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:57.168016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2993) Feb 13 19:34:57.193032 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2993) Feb 13 19:34:57.292953 systemd-networkd[1389]: flannel.1: Link UP Feb 13 19:34:57.292961 systemd-networkd[1389]: flannel.1: Gained carrier Feb 13 19:34:58.143472 kubelet[2426]: E0213 19:34:58.143412 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:58.449115 kubelet[2426]: E0213 19:34:58.448959 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:58.458964 kubelet[2426]: I0213 19:34:58.458912 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4px8r" podStartSLOduration=5.827203791 podStartE2EDuration="9.458899718s" podCreationTimestamp="2025-02-13 19:34:49 +0000 UTC" firstStartedPulling="2025-02-13 19:34:51.42569475 +0000 UTC m=+7.430653987" lastFinishedPulling="2025-02-13 19:34:55.057390677 +0000 UTC m=+11.062349914" observedRunningTime="2025-02-13 19:34:57.15348434 +0000 UTC m=+13.158443577" watchObservedRunningTime="2025-02-13 19:34:58.458899718 +0000 UTC m=+14.463858955" Feb 13 19:34:58.559442 kubelet[2426]: E0213 19:34:58.559399 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:58.898195 systemd-networkd[1389]: flannel.1: Gained IPv6LL Feb 13 19:35:08.076726 kubelet[2426]: E0213 19:35:08.076676 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:08.077309 containerd[1441]: time="2025-02-13T19:35:08.077271539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2mv2v,Uid:ceea40ec-9a69-4c33-972f-fa26eeb46d0a,Namespace:kube-system,Attempt:0,}" Feb 13 19:35:08.100477 systemd-networkd[1389]: cni0: Link UP Feb 13 19:35:08.100483 systemd-networkd[1389]: cni0: Gained carrier Feb 13 19:35:08.101888 systemd-networkd[1389]: cni0: Lost carrier Feb 13 19:35:08.108400 systemd-networkd[1389]: veth4f0a5b36: Link UP Feb 13 19:35:08.114368 kernel: cni0: port 1(veth4f0a5b36) entered blocking state Feb 13 19:35:08.114455 kernel: cni0: port 1(veth4f0a5b36) entered disabled state Feb 13 19:35:08.114472 kernel: veth4f0a5b36: entered allmulticast mode Feb 13 19:35:08.115129 kernel: veth4f0a5b36: entered promiscuous mode Feb 13 19:35:08.117512 kernel: cni0: port 1(veth4f0a5b36) entered blocking state Feb 13 19:35:08.117569 kernel: cni0: port 1(veth4f0a5b36) entered forwarding state Feb 13 19:35:08.119079 kernel: cni0: port 1(veth4f0a5b36) entered disabled state Feb 13 19:35:08.127056 kernel: cni0: port 1(veth4f0a5b36) entered blocking state Feb 13 19:35:08.128159 kernel: cni0: 
port 1(veth4f0a5b36) entered forwarding state Feb 13 19:35:08.127190 systemd-networkd[1389]: veth4f0a5b36: Gained carrier Feb 13 19:35:08.127437 systemd-networkd[1389]: cni0: Gained carrier Feb 13 19:35:08.129584 containerd[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Feb 13 19:35:08.129584 containerd[1441]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:35:08.144014 containerd[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:35:08.143812780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:35:08.144014 containerd[1441]: time="2025-02-13T19:35:08.143875034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:35:08.144014 containerd[1441]: time="2025-02-13T19:35:08.143885956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:08.144014 containerd[1441]: time="2025-02-13T19:35:08.143969815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:08.171160 systemd[1]: Started cri-containerd-105c62423d1d9de6d20673ebfa7d2a1516e8ffce1e2b4e74ec06a1011cdf4df7.scope - libcontainer container 105c62423d1d9de6d20673ebfa7d2a1516e8ffce1e2b4e74ec06a1011cdf4df7. 
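The map containerd prints at 19:35:08 is the configuration flannel's CNI plugin hands to the delegate bridge plugin: network name "cbr0" (the bridge device itself defaults to the cni0 interface seen gaining carrier above), hairpin mode on, no IP masquerade at this layer, host-local IPAM allocating from the node subnet 192.168.0.0/24, a route to the wider flannel network 192.168.0.0/17 (the Go dump's Mask{0xff, 0xff, 0x80, 0x0} is 255.255.128.0, i.e. /17), and MTU 1450, consistent with a 1500-byte uplink minus the usual 50-byte VXLAN overhead of flannel.1. A short sketch that decodes the exact JSON from the delegateAdd line into a typed value; the struct shape is chosen here to match the logged fields and is not the plugin's own type:

package main

import (
    "encoding/json"
    "fmt"
)

// netconf mirrors the fields visible in the delegateAdd JSON above; it is a
// reading aid, not the bridge plugin's own configuration type.
type netconf struct {
    CNIVersion  string `json:"cniVersion"`
    Name        string `json:"name"`
    Type        string `json:"type"`
    MTU         int    `json:"mtu"`
    HairpinMode bool   `json:"hairpinMode"`
    IsGateway   bool   `json:"isGateway"`
    IPMasq      bool   `json:"ipMasq"`
    IPAM        struct {
        Type   string                `json:"type"`
        Ranges [][]map[string]string `json:"ranges"`
        Routes []map[string]string   `json:"routes"`
    } `json:"ipam"`
}

func main() {
    // Verbatim from the containerd log line at 19:35:08.
    raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

    var c netconf
    if err := json.Unmarshal([]byte(raw), &c); err != nil {
        panic(err)
    }
    fmt.Printf("bridge network %q: pod subnet %s, cluster route %s, mtu %d\n",
        c.Name, c.IPAM.Ranges[0][0]["subnet"], c.IPAM.Routes[0]["dst"], c.MTU)
}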
Feb 13 19:35:08.181234 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:35:08.196694 containerd[1441]: time="2025-02-13T19:35:08.196659399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2mv2v,Uid:ceea40ec-9a69-4c33-972f-fa26eeb46d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"105c62423d1d9de6d20673ebfa7d2a1516e8ffce1e2b4e74ec06a1011cdf4df7\"" Feb 13 19:35:08.197316 kubelet[2426]: E0213 19:35:08.197288 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:08.199852 containerd[1441]: time="2025-02-13T19:35:08.199819977Z" level=info msg="CreateContainer within sandbox \"105c62423d1d9de6d20673ebfa7d2a1516e8ffce1e2b4e74ec06a1011cdf4df7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:35:08.208507 containerd[1441]: time="2025-02-13T19:35:08.208474126Z" level=info msg="CreateContainer within sandbox \"105c62423d1d9de6d20673ebfa7d2a1516e8ffce1e2b4e74ec06a1011cdf4df7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1030ccd89fc22d13aeb273da9e8475f30c141b592c137fbbbe754cb61e3fe83d\"" Feb 13 19:35:08.209137 containerd[1441]: time="2025-02-13T19:35:08.208946350Z" level=info msg="StartContainer for \"1030ccd89fc22d13aeb273da9e8475f30c141b592c137fbbbe754cb61e3fe83d\"" Feb 13 19:35:08.240145 systemd[1]: Started cri-containerd-1030ccd89fc22d13aeb273da9e8475f30c141b592c137fbbbe754cb61e3fe83d.scope - libcontainer container 1030ccd89fc22d13aeb273da9e8475f30c141b592c137fbbbe754cb61e3fe83d. Feb 13 19:35:08.263438 containerd[1441]: time="2025-02-13T19:35:08.263329748Z" level=info msg="StartContainer for \"1030ccd89fc22d13aeb273da9e8475f30c141b592c137fbbbe754cb61e3fe83d\" returns successfully" Feb 13 19:35:09.164128 kubelet[2426]: E0213 19:35:09.163816 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:09.246291 kubelet[2426]: I0213 19:35:09.246213 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2mv2v" podStartSLOduration=20.246197492 podStartE2EDuration="20.246197492s" podCreationTimestamp="2025-02-13 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:35:09.179402076 +0000 UTC m=+25.184361273" watchObservedRunningTime="2025-02-13 19:35:09.246197492 +0000 UTC m=+25.251156729" Feb 13 19:35:09.586239 systemd-networkd[1389]: cni0: Gained IPv6LL Feb 13 19:35:09.686697 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:60984.service - OpenSSH per-connection server daemon (10.0.0.1:60984). Feb 13 19:35:09.722022 sshd[3247]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:09.723471 sshd[3247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:09.727690 systemd-logind[1430]: New session 6 of user core. Feb 13 19:35:09.746220 systemd[1]: Started session-6.scope - Session 6 of User core. 
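The two pod_startup_latency_tracker entries decode as follows. For kube-flannel-ds-4px8r (19:34:58 above), podStartE2EDuration is the 9.4589 s from podCreationTimestamp (19:34:49) to watchObservedRunningTime (19:34:58.4589), and podStartSLOduration subtracts the image-pull window lastFinishedPulling − firstStartedPulling ≈ 3.6317 s, leaving 5.8272 s. For coredns-6f6b679f8f-2mv2v just above, both pull timestamps are the zero time because no pull was needed, so the SLO and E2E figures are identical at about 20.25 s, most of it spent waiting out the flannel sandbox failures. A short sketch reproducing the kube-flannel arithmetic from the timestamps quoted in the log:

package main

import (
    "fmt"
    "time"
)

func mustParse(s string) time.Time {
    // Layout matching the kubelet timestamps quoted in the log.
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2025-02-13 19:34:49 +0000 UTC")
    firstPull := mustParse("2025-02-13 19:34:51.42569475 +0000 UTC")
    lastPull := mustParse("2025-02-13 19:34:55.057390677 +0000 UTC")
    running := mustParse("2025-02-13 19:34:58.458899718 +0000 UTC")

    e2e := running.Sub(created)          // 9.458899718s, the podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // 5.827203791s, the podStartSLOduration
    fmt.Println("E2E:", e2e, "SLO (pull excluded):", slo)
}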
Feb 13 19:35:09.778146 systemd-networkd[1389]: veth4f0a5b36: Gained IPv6LL Feb 13 19:35:09.859919 sshd[3247]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:09.863257 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:60984.service: Deactivated successfully. Feb 13 19:35:09.864899 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:35:09.866816 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:35:09.867839 systemd-logind[1430]: Removed session 6. Feb 13 19:35:10.077972 kubelet[2426]: E0213 19:35:10.077550 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:10.078260 containerd[1441]: time="2025-02-13T19:35:10.078209140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbspl,Uid:9d64c6e7-7220-43c3-9b97-d4662517937f,Namespace:kube-system,Attempt:0,}" Feb 13 19:35:10.114078 systemd-networkd[1389]: veth1df7d33b: Link UP Feb 13 19:35:10.116056 kernel: cni0: port 2(veth1df7d33b) entered blocking state Feb 13 19:35:10.116141 kernel: cni0: port 2(veth1df7d33b) entered disabled state Feb 13 19:35:10.116157 kernel: veth1df7d33b: entered allmulticast mode Feb 13 19:35:10.117238 kernel: veth1df7d33b: entered promiscuous mode Feb 13 19:35:10.123019 kernel: cni0: port 2(veth1df7d33b) entered blocking state Feb 13 19:35:10.123078 kernel: cni0: port 2(veth1df7d33b) entered forwarding state Feb 13 19:35:10.123196 systemd-networkd[1389]: veth1df7d33b: Gained carrier Feb 13 19:35:10.124311 containerd[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Feb 13 19:35:10.124311 containerd[1441]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:35:10.139081 containerd[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:35:10.138958800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:35:10.139081 containerd[1441]: time="2025-02-13T19:35:10.139033614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:35:10.139081 containerd[1441]: time="2025-02-13T19:35:10.139049337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:10.139322 containerd[1441]: time="2025-02-13T19:35:10.139139595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:10.160232 systemd[1]: Started cri-containerd-f064517a48c2633031082a5890154a01e70b38d478bb99aee957b2deb3a8a810.scope - libcontainer container f064517a48c2633031082a5890154a01e70b38d478bb99aee957b2deb3a8a810. 
Feb 13 19:35:10.167788 kubelet[2426]: E0213 19:35:10.167760 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:10.177878 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:35:10.195552 containerd[1441]: time="2025-02-13T19:35:10.195517327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbspl,Uid:9d64c6e7-7220-43c3-9b97-d4662517937f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f064517a48c2633031082a5890154a01e70b38d478bb99aee957b2deb3a8a810\"" Feb 13 19:35:10.196251 kubelet[2426]: E0213 19:35:10.196229 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:10.200001 containerd[1441]: time="2025-02-13T19:35:10.199930903Z" level=info msg="CreateContainer within sandbox \"f064517a48c2633031082a5890154a01e70b38d478bb99aee957b2deb3a8a810\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:35:10.211378 containerd[1441]: time="2025-02-13T19:35:10.211286545Z" level=info msg="CreateContainer within sandbox \"f064517a48c2633031082a5890154a01e70b38d478bb99aee957b2deb3a8a810\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"922e126c6cd25ecdf89ebb39d5d26bb34bf0359cd3b02ab366101f392dfb647d\"" Feb 13 19:35:10.211778 containerd[1441]: time="2025-02-13T19:35:10.211747475Z" level=info msg="StartContainer for \"922e126c6cd25ecdf89ebb39d5d26bb34bf0359cd3b02ab366101f392dfb647d\"" Feb 13 19:35:10.235128 systemd[1]: Started cri-containerd-922e126c6cd25ecdf89ebb39d5d26bb34bf0359cd3b02ab366101f392dfb647d.scope - libcontainer container 922e126c6cd25ecdf89ebb39d5d26bb34bf0359cd3b02ab366101f392dfb647d. 
Feb 13 19:35:10.256373 containerd[1441]: time="2025-02-13T19:35:10.256238062Z" level=info msg="StartContainer for \"922e126c6cd25ecdf89ebb39d5d26bb34bf0359cd3b02ab366101f392dfb647d\" returns successfully" Feb 13 19:35:11.169483 kubelet[2426]: E0213 19:35:11.169399 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:11.180216 kubelet[2426]: I0213 19:35:11.178912 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bbspl" podStartSLOduration=22.178895427 podStartE2EDuration="22.178895427s" podCreationTimestamp="2025-02-13 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:35:11.178876904 +0000 UTC m=+27.183836141" watchObservedRunningTime="2025-02-13 19:35:11.178895427 +0000 UTC m=+27.183854664" Feb 13 19:35:11.762284 systemd-networkd[1389]: veth1df7d33b: Gained IPv6LL Feb 13 19:35:12.171232 kubelet[2426]: E0213 19:35:12.171204 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:13.173339 kubelet[2426]: E0213 19:35:13.172941 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:14.874353 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:45142.service - OpenSSH per-connection server daemon (10.0.0.1:45142). Feb 13 19:35:14.908658 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 45142 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:14.909828 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:14.913634 systemd-logind[1430]: New session 7 of user core. Feb 13 19:35:14.920127 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:35:15.038489 sshd[3399]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:15.041785 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:45142.service: Deactivated successfully. Feb 13 19:35:15.043467 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:35:15.044130 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:35:15.044873 systemd-logind[1430]: Removed session 7. Feb 13 19:35:20.049632 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:45146.service - OpenSSH per-connection server daemon (10.0.0.1:45146). Feb 13 19:35:20.083457 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 45146 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:20.084619 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:20.088096 systemd-logind[1430]: New session 8 of user core. Feb 13 19:35:20.105150 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:35:20.213698 sshd[3435]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:20.217152 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:45146.service: Deactivated successfully. Feb 13 19:35:20.220540 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:35:20.221184 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:35:20.222030 systemd-logind[1430]: Removed session 8. 
Feb 13 19:35:25.226449 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628). Feb 13 19:35:25.259825 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:25.260964 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:25.264730 systemd-logind[1430]: New session 9 of user core. Feb 13 19:35:25.276122 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:35:25.383387 sshd[3473]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:25.386219 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:40628.service: Deactivated successfully. Feb 13 19:35:25.387723 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:35:25.388738 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:35:25.389551 systemd-logind[1430]: Removed session 9. Feb 13 19:35:30.396527 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:40644.service - OpenSSH per-connection server daemon (10.0.0.1:40644). Feb 13 19:35:30.430583 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 40644 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:30.431766 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:30.435659 systemd-logind[1430]: New session 10 of user core. Feb 13 19:35:30.444120 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:35:30.553097 sshd[3510]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:30.556055 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:40644.service: Deactivated successfully. Feb 13 19:35:30.557717 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:35:30.559536 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:35:30.560468 systemd-logind[1430]: Removed session 10. Feb 13 19:35:35.568474 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:48604.service - OpenSSH per-connection server daemon (10.0.0.1:48604). Feb 13 19:35:35.602435 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 48604 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:35.603645 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:35.607574 systemd-logind[1430]: New session 11 of user core. Feb 13 19:35:35.614151 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:35:35.723263 sshd[3546]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:35.727353 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:48604.service: Deactivated successfully. Feb 13 19:35:35.729080 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:35:35.729705 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:35:35.730594 systemd-logind[1430]: Removed session 11. Feb 13 19:35:40.733473 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:48610.service - OpenSSH per-connection server daemon (10.0.0.1:48610). Feb 13 19:35:40.766792 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 48610 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:40.767980 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:40.771905 systemd-logind[1430]: New session 12 of user core. 
Feb 13 19:35:40.781135 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:35:40.889018 sshd[3584]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:40.894135 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:48610.service: Deactivated successfully. Feb 13 19:35:40.895753 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:35:40.896376 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:35:40.897350 systemd-logind[1430]: Removed session 12. Feb 13 19:35:45.899491 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). Feb 13 19:35:45.933189 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:45.934438 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:45.938516 systemd-logind[1430]: New session 13 of user core. Feb 13 19:35:45.945159 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:35:46.051155 sshd[3623]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:46.054451 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:50544.service: Deactivated successfully. Feb 13 19:35:46.056362 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:35:46.057058 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:35:46.057884 systemd-logind[1430]: Removed session 13. Feb 13 19:35:51.061773 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:50560.service - OpenSSH per-connection server daemon (10.0.0.1:50560). Feb 13 19:35:51.096703 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 50560 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:51.098066 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:51.103046 systemd-logind[1430]: New session 14 of user core. Feb 13 19:35:51.111222 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:35:51.223379 sshd[3662]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:51.228458 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:50560.service: Deactivated successfully. Feb 13 19:35:51.232711 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:35:51.233537 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:35:51.235041 systemd-logind[1430]: Removed session 14. Feb 13 19:35:56.233478 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:37686.service - OpenSSH per-connection server daemon (10.0.0.1:37686). Feb 13 19:35:56.267066 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 37686 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:35:56.268176 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:56.271493 systemd-logind[1430]: New session 15 of user core. Feb 13 19:35:56.281238 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:35:56.392599 sshd[3699]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:56.395729 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:37686.service: Deactivated successfully. Feb 13 19:35:56.397531 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:35:56.398154 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:35:56.399037 systemd-logind[1430]: Removed session 15. 
Feb 13 19:36:01.403486 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:37700.service - OpenSSH per-connection server daemon (10.0.0.1:37700). Feb 13 19:36:01.440754 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 37700 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:01.441951 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:01.446063 systemd-logind[1430]: New session 16 of user core. Feb 13 19:36:01.462162 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:36:01.570073 sshd[3735]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:01.573322 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:37700.service: Deactivated successfully. Feb 13 19:36:01.575062 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:36:01.575666 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:36:01.576764 systemd-logind[1430]: Removed session 16. Feb 13 19:36:03.077198 kubelet[2426]: E0213 19:36:03.077118 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:06.582447 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:42334.service - OpenSSH per-connection server daemon (10.0.0.1:42334). Feb 13 19:36:06.616107 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 42334 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:06.617330 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:06.621225 systemd-logind[1430]: New session 17 of user core. Feb 13 19:36:06.637139 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:36:06.747730 sshd[3772]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:06.751083 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:42334.service: Deactivated successfully. Feb 13 19:36:06.753969 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:36:06.755074 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:36:06.756043 systemd-logind[1430]: Removed session 17. Feb 13 19:36:09.077260 kubelet[2426]: E0213 19:36:09.077219 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:11.077356 kubelet[2426]: E0213 19:36:11.077313 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:11.758458 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342). Feb 13 19:36:11.791853 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:11.793017 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:11.796623 systemd-logind[1430]: New session 18 of user core. Feb 13 19:36:11.805126 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:36:11.913034 sshd[3810]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:11.917115 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:42342.service: Deactivated successfully. 
Feb 13 19:36:11.920261 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:36:11.921157 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:36:11.922098 systemd-logind[1430]: Removed session 18. Feb 13 19:36:12.076949 kubelet[2426]: E0213 19:36:12.076547 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:13.077401 kubelet[2426]: E0213 19:36:13.077358 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:16.923535 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:43074.service - OpenSSH per-connection server daemon (10.0.0.1:43074). Feb 13 19:36:16.957136 sshd[3846]: Accepted publickey for core from 10.0.0.1 port 43074 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:16.958392 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:16.961921 systemd-logind[1430]: New session 19 of user core. Feb 13 19:36:16.972155 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:36:17.083778 sshd[3846]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:17.087183 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:43074.service: Deactivated successfully. Feb 13 19:36:17.088832 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:36:17.090487 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:36:17.091257 systemd-logind[1430]: Removed session 19. Feb 13 19:36:22.103519 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:43080.service - OpenSSH per-connection server daemon (10.0.0.1:43080). Feb 13 19:36:22.140423 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 43080 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:22.141576 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:22.145062 systemd-logind[1430]: New session 20 of user core. Feb 13 19:36:22.154127 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:36:22.260186 sshd[3884]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:22.262524 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:36:22.264382 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:43080.service: Deactivated successfully. Feb 13 19:36:22.266739 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:36:22.267528 systemd-logind[1430]: Removed session 20. Feb 13 19:36:23.077619 kubelet[2426]: E0213 19:36:23.077523 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:24.077509 kubelet[2426]: E0213 19:36:24.077434 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:36:27.270423 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:53892.service - OpenSSH per-connection server daemon (10.0.0.1:53892). 
Feb 13 19:36:27.305849 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 53892 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:27.307496 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:27.311698 systemd-logind[1430]: New session 21 of user core. Feb 13 19:36:27.325152 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:36:27.440217 sshd[3921]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:27.443083 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:53892.service: Deactivated successfully. Feb 13 19:36:27.446273 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:36:27.448388 systemd-logind[1430]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:36:27.449452 systemd-logind[1430]: Removed session 21. Feb 13 19:36:32.450696 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:53898.service - OpenSSH per-connection server daemon (10.0.0.1:53898). Feb 13 19:36:32.486003 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 53898 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:32.487304 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:32.491064 systemd-logind[1430]: New session 22 of user core. Feb 13 19:36:32.498154 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:36:32.605083 sshd[3957]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:32.608710 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:53898.service: Deactivated successfully. Feb 13 19:36:32.610447 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:36:32.612510 systemd-logind[1430]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:36:32.613646 systemd-logind[1430]: Removed session 22. Feb 13 19:36:37.615570 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:41462.service - OpenSSH per-connection server daemon (10.0.0.1:41462). Feb 13 19:36:37.649246 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:37.650605 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:37.654403 systemd-logind[1430]: New session 23 of user core. Feb 13 19:36:37.669140 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:36:37.775779 sshd[3999]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:37.778445 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:41462.service: Deactivated successfully. Feb 13 19:36:37.781477 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:36:37.783939 systemd-logind[1430]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:36:37.784728 systemd-logind[1430]: Removed session 23. Feb 13 19:36:42.786396 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:54728.service - OpenSSH per-connection server daemon (10.0.0.1:54728). Feb 13 19:36:42.819636 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 54728 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:42.820734 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:42.824187 systemd-logind[1430]: New session 24 of user core. Feb 13 19:36:42.830175 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 19:36:42.938902 sshd[4051]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:42.941412 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:54728.service: Deactivated successfully. Feb 13 19:36:42.942936 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:36:42.944162 systemd-logind[1430]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:36:42.945102 systemd-logind[1430]: Removed session 24. Feb 13 19:36:47.949321 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Feb 13 19:36:47.982639 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:47.983830 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:47.987642 systemd-logind[1430]: New session 25 of user core. Feb 13 19:36:48.000870 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:36:48.107192 sshd[4089]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:48.110231 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:54730.service: Deactivated successfully. Feb 13 19:36:48.111894 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:36:48.114568 systemd-logind[1430]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:36:48.115318 systemd-logind[1430]: Removed session 25. Feb 13 19:36:53.116425 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:40758.service - OpenSSH per-connection server daemon (10.0.0.1:40758). Feb 13 19:36:53.150450 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 40758 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:53.151674 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:53.154936 systemd-logind[1430]: New session 26 of user core. Feb 13 19:36:53.161163 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:36:53.268107 sshd[4127]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:53.271522 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:40758.service: Deactivated successfully. Feb 13 19:36:53.273245 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:36:53.275411 systemd-logind[1430]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:36:53.276262 systemd-logind[1430]: Removed session 26. Feb 13 19:36:58.278578 systemd[1]: Started sshd@26-10.0.0.52:22-10.0.0.1:40764.service - OpenSSH per-connection server daemon (10.0.0.1:40764). Feb 13 19:36:58.313016 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 40764 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:36:58.313880 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:36:58.317742 systemd-logind[1430]: New session 27 of user core. Feb 13 19:36:58.328129 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:36:58.435782 sshd[4163]: pam_unix(sshd:session): session closed for user core Feb 13 19:36:58.438466 systemd[1]: sshd@26-10.0.0.52:22-10.0.0.1:40764.service: Deactivated successfully. Feb 13 19:36:58.440124 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:36:58.441491 systemd-logind[1430]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:36:58.442502 systemd-logind[1430]: Removed session 27. 
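From 19:35:09 onward the log settles into one repeating shape: a per-connection sshd service starts, a publickey login for user core arrives from 10.0.0.1 with the same RSA fingerprint, and the resulting session is opened and closed again within a fraction of a second, roughly every five seconds (sessions 6 through 42 span 19:35:09 to 19:38:11). The cadence and single source address are consistent with an automated harness or health check polling the node over SSH rather than interactive use, though the log itself does not say which. A small sketch for measuring that cadence from a saved copy of this journal; the file name is an assumption and the scan assumes one journal entry per line, as journalctl would emit:

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
    "time"
)

// Scans a saved journal (e.g. "node.log", an assumed file name) for
// systemd-logind "New session N of user core" entries and prints the gap
// between consecutive logins.
func main() {
    re := regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+\.\d+) .*New session (\d+) of user core`)

    f, err := os.Open("node.log")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    var prev time.Time
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        m := re.FindStringSubmatch(sc.Text())
        if m == nil {
            continue
        }
        // Journal lines carry no year; 2025 is taken from the container timestamps.
        t, err := time.Parse("Jan 2 15:04:05.000000 2006", m[1]+" 2025")
        if err != nil {
            continue
        }
        if !prev.IsZero() {
            fmt.Printf("session %s opened %.1fs after the previous one\n", m[2], t.Sub(prev).Seconds())
        }
        prev = t
    }
}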
Feb 13 19:37:03.447868 systemd[1]: Started sshd@27-10.0.0.52:22-10.0.0.1:41304.service - OpenSSH per-connection server daemon (10.0.0.1:41304). Feb 13 19:37:03.481639 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 41304 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:03.482755 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:03.486227 systemd-logind[1430]: New session 28 of user core. Feb 13 19:37:03.500213 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:37:03.607888 sshd[4199]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:03.611502 systemd[1]: sshd@27-10.0.0.52:22-10.0.0.1:41304.service: Deactivated successfully. Feb 13 19:37:03.613320 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:37:03.613938 systemd-logind[1430]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:37:03.615701 systemd-logind[1430]: Removed session 28. Feb 13 19:37:08.618696 systemd[1]: Started sshd@28-10.0.0.52:22-10.0.0.1:41316.service - OpenSSH per-connection server daemon (10.0.0.1:41316). Feb 13 19:37:08.653058 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 41316 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:08.654139 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:08.658045 systemd-logind[1430]: New session 29 of user core. Feb 13 19:37:08.664158 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 19:37:08.778885 sshd[4236]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:08.782216 systemd[1]: sshd@28-10.0.0.52:22-10.0.0.1:41316.service: Deactivated successfully. Feb 13 19:37:08.784509 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 19:37:08.785215 systemd-logind[1430]: Session 29 logged out. Waiting for processes to exit. Feb 13 19:37:08.785982 systemd-logind[1430]: Removed session 29. Feb 13 19:37:13.789956 systemd[1]: Started sshd@29-10.0.0.52:22-10.0.0.1:41824.service - OpenSSH per-connection server daemon (10.0.0.1:41824). Feb 13 19:37:13.874036 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 41824 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:13.875620 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:13.880074 systemd-logind[1430]: New session 30 of user core. Feb 13 19:37:13.886134 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 19:37:13.998050 sshd[4275]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:14.001589 systemd[1]: sshd@29-10.0.0.52:22-10.0.0.1:41824.service: Deactivated successfully. Feb 13 19:37:14.004772 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 19:37:14.005424 systemd-logind[1430]: Session 30 logged out. Waiting for processes to exit. Feb 13 19:37:14.007367 systemd-logind[1430]: Removed session 30. Feb 13 19:37:16.077730 kubelet[2426]: E0213 19:37:16.077268 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:19.008471 systemd[1]: Started sshd@30-10.0.0.52:22-10.0.0.1:41840.service - OpenSSH per-connection server daemon (10.0.0.1:41840). 
Feb 13 19:37:19.042266 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 41840 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:19.043475 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:19.047472 systemd-logind[1430]: New session 31 of user core. Feb 13 19:37:19.060118 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 19:37:19.165423 sshd[4313]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:19.168512 systemd[1]: sshd@30-10.0.0.52:22-10.0.0.1:41840.service: Deactivated successfully. Feb 13 19:37:19.170109 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 19:37:19.172606 systemd-logind[1430]: Session 31 logged out. Waiting for processes to exit. Feb 13 19:37:19.173519 systemd-logind[1430]: Removed session 31. Feb 13 19:37:24.179542 systemd[1]: Started sshd@31-10.0.0.52:22-10.0.0.1:57576.service - OpenSSH per-connection server daemon (10.0.0.1:57576). Feb 13 19:37:24.216763 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 57576 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:24.217908 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:24.222191 systemd-logind[1430]: New session 32 of user core. Feb 13 19:37:24.229128 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 19:37:24.349929 sshd[4352]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:24.354582 systemd[1]: sshd@31-10.0.0.52:22-10.0.0.1:57576.service: Deactivated successfully. Feb 13 19:37:24.358771 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 19:37:24.360832 systemd-logind[1430]: Session 32 logged out. Waiting for processes to exit. Feb 13 19:37:24.361754 systemd-logind[1430]: Removed session 32. Feb 13 19:37:25.077456 kubelet[2426]: E0213 19:37:25.077419 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:29.361964 systemd[1]: Started sshd@32-10.0.0.52:22-10.0.0.1:57590.service - OpenSSH per-connection server daemon (10.0.0.1:57590). Feb 13 19:37:29.399490 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 57590 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:29.399939 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:29.406671 systemd-logind[1430]: New session 33 of user core. Feb 13 19:37:29.415646 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 19:37:29.530867 sshd[4388]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:29.533613 systemd[1]: sshd@32-10.0.0.52:22-10.0.0.1:57590.service: Deactivated successfully. Feb 13 19:37:29.536343 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 19:37:29.541020 systemd-logind[1430]: Session 33 logged out. Waiting for processes to exit. Feb 13 19:37:29.542348 systemd-logind[1430]: Removed session 33. 
Feb 13 19:37:30.077101 kubelet[2426]: E0213 19:37:30.077016 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:32.077614 kubelet[2426]: E0213 19:37:32.077583 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:34.541524 systemd[1]: Started sshd@33-10.0.0.52:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608). Feb 13 19:37:34.575726 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:34.577022 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:34.580575 systemd-logind[1430]: New session 34 of user core. Feb 13 19:37:34.589136 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 19:37:34.695085 sshd[4424]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:34.698319 systemd[1]: sshd@33-10.0.0.52:22-10.0.0.1:39608.service: Deactivated successfully. Feb 13 19:37:34.700450 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 19:37:34.701028 systemd-logind[1430]: Session 34 logged out. Waiting for processes to exit. Feb 13 19:37:34.701770 systemd-logind[1430]: Removed session 34. Feb 13 19:37:39.077473 kubelet[2426]: E0213 19:37:39.077427 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:39.705438 systemd[1]: Started sshd@34-10.0.0.52:22-10.0.0.1:39614.service - OpenSSH per-connection server daemon (10.0.0.1:39614). Feb 13 19:37:39.739280 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 39614 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:39.740625 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:39.744289 systemd-logind[1430]: New session 35 of user core. Feb 13 19:37:39.750142 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 19:37:39.855207 sshd[4460]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:39.858348 systemd[1]: sshd@34-10.0.0.52:22-10.0.0.1:39614.service: Deactivated successfully. Feb 13 19:37:39.859984 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 19:37:39.861614 systemd-logind[1430]: Session 35 logged out. Waiting for processes to exit. Feb 13 19:37:39.862645 systemd-logind[1430]: Removed session 35. Feb 13 19:37:43.077051 kubelet[2426]: E0213 19:37:43.076983 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:44.865821 systemd[1]: Started sshd@35-10.0.0.52:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536). Feb 13 19:37:44.899376 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:44.900509 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:44.904321 systemd-logind[1430]: New session 36 of user core. Feb 13 19:37:44.914201 systemd[1]: Started session-36.scope - Session 36 of User core. 
Feb 13 19:37:45.022562 sshd[4499]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:45.025946 systemd[1]: sshd@35-10.0.0.52:22-10.0.0.1:48536.service: Deactivated successfully. Feb 13 19:37:45.027576 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 19:37:45.029482 systemd-logind[1430]: Session 36 logged out. Waiting for processes to exit. Feb 13 19:37:45.030409 systemd-logind[1430]: Removed session 36. Feb 13 19:37:47.076919 kubelet[2426]: E0213 19:37:47.076819 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:50.033515 systemd[1]: Started sshd@36-10.0.0.52:22-10.0.0.1:48548.service - OpenSSH per-connection server daemon (10.0.0.1:48548). Feb 13 19:37:50.067508 sshd[4536]: Accepted publickey for core from 10.0.0.1 port 48548 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:50.068750 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:50.072298 systemd-logind[1430]: New session 37 of user core. Feb 13 19:37:50.086201 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 19:37:50.193363 sshd[4536]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:50.196668 systemd[1]: sshd@36-10.0.0.52:22-10.0.0.1:48548.service: Deactivated successfully. Feb 13 19:37:50.199426 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 19:37:50.200014 systemd-logind[1430]: Session 37 logged out. Waiting for processes to exit. Feb 13 19:37:50.200788 systemd-logind[1430]: Removed session 37. Feb 13 19:37:55.203632 systemd[1]: Started sshd@37-10.0.0.52:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054). Feb 13 19:37:55.237634 sshd[4574]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:37:55.238879 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:55.242948 systemd-logind[1430]: New session 38 of user core. Feb 13 19:37:55.251149 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 19:37:55.360082 sshd[4574]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:55.363356 systemd[1]: sshd@37-10.0.0.52:22-10.0.0.1:40054.service: Deactivated successfully. Feb 13 19:37:55.366488 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 19:37:55.367080 systemd-logind[1430]: Session 38 logged out. Waiting for processes to exit. Feb 13 19:37:55.368193 systemd-logind[1430]: Removed session 38. Feb 13 19:38:00.370583 systemd[1]: Started sshd@38-10.0.0.52:22-10.0.0.1:40062.service - OpenSSH per-connection server daemon (10.0.0.1:40062). Feb 13 19:38:00.404795 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 40062 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:00.406076 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:00.409550 systemd-logind[1430]: New session 39 of user core. Feb 13 19:38:00.415174 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 19:38:00.526901 sshd[4610]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:00.530207 systemd[1]: sshd@38-10.0.0.52:22-10.0.0.1:40062.service: Deactivated successfully. Feb 13 19:38:00.531843 systemd[1]: session-39.scope: Deactivated successfully. 
Feb 13 19:38:00.533263 systemd-logind[1430]: Session 39 logged out. Waiting for processes to exit. Feb 13 19:38:00.534369 systemd-logind[1430]: Removed session 39. Feb 13 19:38:05.541551 systemd[1]: Started sshd@39-10.0.0.52:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556). Feb 13 19:38:05.580269 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:05.580681 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:05.585429 systemd-logind[1430]: New session 40 of user core. Feb 13 19:38:05.596227 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 19:38:05.712815 sshd[4647]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:05.716097 systemd[1]: sshd@39-10.0.0.52:22-10.0.0.1:60556.service: Deactivated successfully. Feb 13 19:38:05.717864 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 19:38:05.719052 systemd-logind[1430]: Session 40 logged out. Waiting for processes to exit. Feb 13 19:38:05.719943 systemd-logind[1430]: Removed session 40. Feb 13 19:38:10.723656 systemd[1]: Started sshd@40-10.0.0.52:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). Feb 13 19:38:10.760040 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:10.761019 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:10.765048 systemd-logind[1430]: New session 41 of user core. Feb 13 19:38:10.772187 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 19:38:10.888861 sshd[4683]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:10.905422 systemd[1]: sshd@40-10.0.0.52:22-10.0.0.1:60562.service: Deactivated successfully. Feb 13 19:38:10.907367 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 19:38:10.908853 systemd-logind[1430]: Session 41 logged out. Waiting for processes to exit. Feb 13 19:38:10.920478 systemd[1]: Started sshd@41-10.0.0.52:22-10.0.0.1:60574.service - OpenSSH per-connection server daemon (10.0.0.1:60574). Feb 13 19:38:10.921543 systemd-logind[1430]: Removed session 41. Feb 13 19:38:10.956591 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 60574 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:10.957872 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:10.961699 systemd-logind[1430]: New session 42 of user core. Feb 13 19:38:10.977146 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 19:38:11.138318 sshd[4698]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:11.146906 systemd[1]: sshd@41-10.0.0.52:22-10.0.0.1:60574.service: Deactivated successfully. Feb 13 19:38:11.150421 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 19:38:11.153581 systemd-logind[1430]: Session 42 logged out. Waiting for processes to exit. Feb 13 19:38:11.162320 systemd[1]: Started sshd@42-10.0.0.52:22-10.0.0.1:60582.service - OpenSSH per-connection server daemon (10.0.0.1:60582). Feb 13 19:38:11.164563 systemd-logind[1430]: Removed session 42. 
Feb 13 19:38:11.211966 sshd[4711]: Accepted publickey for core from 10.0.0.1 port 60582 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:11.213343 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:11.217681 systemd-logind[1430]: New session 43 of user core. Feb 13 19:38:11.227123 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 19:38:11.337496 sshd[4711]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:11.341017 systemd[1]: sshd@42-10.0.0.52:22-10.0.0.1:60582.service: Deactivated successfully. Feb 13 19:38:11.344853 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 19:38:11.345793 systemd-logind[1430]: Session 43 logged out. Waiting for processes to exit. Feb 13 19:38:11.346870 systemd-logind[1430]: Removed session 43. Feb 13 19:38:16.351806 systemd[1]: Started sshd@43-10.0.0.52:22-10.0.0.1:35910.service - OpenSSH per-connection server daemon (10.0.0.1:35910). Feb 13 19:38:16.388013 sshd[4747]: Accepted publickey for core from 10.0.0.1 port 35910 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:16.388918 sshd[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:16.393096 systemd-logind[1430]: New session 44 of user core. Feb 13 19:38:16.401204 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 19:38:16.520641 sshd[4747]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:16.524693 systemd[1]: sshd@43-10.0.0.52:22-10.0.0.1:35910.service: Deactivated successfully. Feb 13 19:38:16.527659 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 19:38:16.528676 systemd-logind[1430]: Session 44 logged out. Waiting for processes to exit. Feb 13 19:38:16.529441 systemd-logind[1430]: Removed session 44. Feb 13 19:38:21.531783 systemd[1]: Started sshd@44-10.0.0.52:22-10.0.0.1:35924.service - OpenSSH per-connection server daemon (10.0.0.1:35924). Feb 13 19:38:21.566796 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 35924 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:21.568334 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:21.571792 systemd-logind[1430]: New session 45 of user core. Feb 13 19:38:21.588132 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 19:38:21.696061 sshd[4784]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:21.700302 systemd[1]: sshd@44-10.0.0.52:22-10.0.0.1:35924.service: Deactivated successfully. Feb 13 19:38:21.703612 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 19:38:21.704370 systemd-logind[1430]: Session 45 logged out. Waiting for processes to exit. Feb 13 19:38:21.705355 systemd-logind[1430]: Removed session 45. Feb 13 19:38:26.706586 systemd[1]: Started sshd@45-10.0.0.52:22-10.0.0.1:52096.service - OpenSSH per-connection server daemon (10.0.0.1:52096). Feb 13 19:38:26.741515 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 52096 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:26.742764 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:26.746954 systemd-logind[1430]: New session 46 of user core. Feb 13 19:38:26.754340 systemd[1]: Started session-46.scope - Session 46 of User core. 
Feb 13 19:38:26.861776 sshd[4819]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:26.865164 systemd[1]: sshd@45-10.0.0.52:22-10.0.0.1:52096.service: Deactivated successfully. Feb 13 19:38:26.868558 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 19:38:26.869386 systemd-logind[1430]: Session 46 logged out. Waiting for processes to exit. Feb 13 19:38:26.870162 systemd-logind[1430]: Removed session 46. Feb 13 19:38:31.872510 systemd[1]: Started sshd@46-10.0.0.52:22-10.0.0.1:52104.service - OpenSSH per-connection server daemon (10.0.0.1:52104). Feb 13 19:38:31.906275 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 52104 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:31.907457 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:31.912066 systemd-logind[1430]: New session 47 of user core. Feb 13 19:38:31.924132 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 19:38:32.034248 sshd[4855]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:32.037271 systemd[1]: sshd@46-10.0.0.52:22-10.0.0.1:52104.service: Deactivated successfully. Feb 13 19:38:32.038786 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 19:38:32.039416 systemd-logind[1430]: Session 47 logged out. Waiting for processes to exit. Feb 13 19:38:32.040315 systemd-logind[1430]: Removed session 47. Feb 13 19:38:37.045640 systemd[1]: Started sshd@47-10.0.0.52:22-10.0.0.1:45372.service - OpenSSH per-connection server daemon (10.0.0.1:45372). Feb 13 19:38:37.077041 kubelet[2426]: E0213 19:38:37.076941 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:37.077472 kubelet[2426]: E0213 19:38:37.077402 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:37.079103 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 45372 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:37.081417 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:37.085449 systemd-logind[1430]: New session 48 of user core. Feb 13 19:38:37.094393 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 19:38:37.201674 sshd[4890]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:37.204972 systemd[1]: sshd@47-10.0.0.52:22-10.0.0.1:45372.service: Deactivated successfully. Feb 13 19:38:37.207594 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 19:38:37.208791 systemd-logind[1430]: Session 48 logged out. Waiting for processes to exit. Feb 13 19:38:37.210304 systemd-logind[1430]: Removed session 48. Feb 13 19:38:42.212836 systemd[1]: Started sshd@48-10.0.0.52:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374). Feb 13 19:38:42.281717 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:42.283075 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:42.287634 systemd-logind[1430]: New session 49 of user core. Feb 13 19:38:42.298180 systemd[1]: Started session-49.scope - Session 49 of User core. 
Feb 13 19:38:42.418206 sshd[4925]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:42.420852 systemd[1]: sshd@48-10.0.0.52:22-10.0.0.1:45374.service: Deactivated successfully. Feb 13 19:38:42.422763 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 19:38:42.424257 systemd-logind[1430]: Session 49 logged out. Waiting for processes to exit. Feb 13 19:38:42.425222 systemd-logind[1430]: Removed session 49. Feb 13 19:38:45.077003 kubelet[2426]: E0213 19:38:45.076964 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:46.076967 kubelet[2426]: E0213 19:38:46.076601 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:47.076998 kubelet[2426]: E0213 19:38:47.076956 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:47.429258 systemd[1]: Started sshd@49-10.0.0.52:22-10.0.0.1:58710.service - OpenSSH per-connection server daemon (10.0.0.1:58710). Feb 13 19:38:47.463165 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 58710 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:47.464521 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:47.467820 systemd-logind[1430]: New session 50 of user core. Feb 13 19:38:47.476118 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 19:38:47.583157 sshd[4963]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:47.586944 systemd[1]: sshd@49-10.0.0.52:22-10.0.0.1:58710.service: Deactivated successfully. Feb 13 19:38:47.588544 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 19:38:47.589149 systemd-logind[1430]: Session 50 logged out. Waiting for processes to exit. Feb 13 19:38:47.589873 systemd-logind[1430]: Removed session 50. Feb 13 19:38:49.076947 kubelet[2426]: E0213 19:38:49.076848 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:50.077557 kubelet[2426]: E0213 19:38:50.077514 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:52.593759 systemd[1]: Started sshd@50-10.0.0.52:22-10.0.0.1:48958.service - OpenSSH per-connection server daemon (10.0.0.1:48958). Feb 13 19:38:52.628036 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 48958 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:52.629345 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:52.633445 systemd-logind[1430]: New session 51 of user core. Feb 13 19:38:52.640137 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 19:38:52.744860 sshd[5001]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:52.748063 systemd[1]: sshd@50-10.0.0.52:22-10.0.0.1:48958.service: Deactivated successfully. Feb 13 19:38:52.750383 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 19:38:52.751100 systemd-logind[1430]: Session 51 logged out. 
Waiting for processes to exit. Feb 13 19:38:52.753194 systemd-logind[1430]: Removed session 51. Feb 13 19:38:57.757022 systemd[1]: Started sshd@51-10.0.0.52:22-10.0.0.1:48970.service - OpenSSH per-connection server daemon (10.0.0.1:48970). Feb 13 19:38:57.791390 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 48970 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:38:57.792596 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:57.798907 systemd-logind[1430]: New session 52 of user core. Feb 13 19:38:57.809578 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 19:38:57.919294 sshd[5036]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:57.922886 systemd[1]: sshd@51-10.0.0.52:22-10.0.0.1:48970.service: Deactivated successfully. Feb 13 19:38:57.925356 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 19:38:57.925937 systemd-logind[1430]: Session 52 logged out. Waiting for processes to exit. Feb 13 19:38:57.926743 systemd-logind[1430]: Removed session 52. Feb 13 19:39:02.929737 systemd[1]: Started sshd@52-10.0.0.52:22-10.0.0.1:59568.service - OpenSSH per-connection server daemon (10.0.0.1:59568). Feb 13 19:39:02.968911 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 59568 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:02.970178 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:02.974049 systemd-logind[1430]: New session 53 of user core. Feb 13 19:39:02.985195 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 19:39:03.097357 sshd[5077]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:03.100646 systemd[1]: sshd@52-10.0.0.52:22-10.0.0.1:59568.service: Deactivated successfully. Feb 13 19:39:03.102527 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 19:39:03.103310 systemd-logind[1430]: Session 53 logged out. Waiting for processes to exit. Feb 13 19:39:03.104116 systemd-logind[1430]: Removed session 53. Feb 13 19:39:08.108571 systemd[1]: Started sshd@53-10.0.0.52:22-10.0.0.1:59572.service - OpenSSH per-connection server daemon (10.0.0.1:59572). Feb 13 19:39:08.143591 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 59572 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:08.145025 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:08.149231 systemd-logind[1430]: New session 54 of user core. Feb 13 19:39:08.158179 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 19:39:08.269169 sshd[5112]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:08.276225 systemd[1]: sshd@53-10.0.0.52:22-10.0.0.1:59572.service: Deactivated successfully. Feb 13 19:39:08.277829 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 19:39:08.279829 systemd-logind[1430]: Session 54 logged out. Waiting for processes to exit. Feb 13 19:39:08.281091 systemd-logind[1430]: Removed session 54. Feb 13 19:39:13.279799 systemd[1]: Started sshd@54-10.0.0.52:22-10.0.0.1:34282.service - OpenSSH per-connection server daemon (10.0.0.1:34282). 
Feb 13 19:39:13.313457 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 34282 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:13.314769 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:13.318740 systemd-logind[1430]: New session 55 of user core. Feb 13 19:39:13.327180 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 19:39:13.430450 sshd[5163]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:13.433449 systemd[1]: sshd@54-10.0.0.52:22-10.0.0.1:34282.service: Deactivated successfully. Feb 13 19:39:13.436493 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 19:39:13.437812 systemd-logind[1430]: Session 55 logged out. Waiting for processes to exit. Feb 13 19:39:13.439556 systemd-logind[1430]: Removed session 55. Feb 13 19:39:18.448494 systemd[1]: Started sshd@55-10.0.0.52:22-10.0.0.1:34296.service - OpenSSH per-connection server daemon (10.0.0.1:34296). Feb 13 19:39:18.479800 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 34296 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:18.481048 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:18.485545 systemd-logind[1430]: New session 56 of user core. Feb 13 19:39:18.504391 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 19:39:18.633447 sshd[5198]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:18.636510 systemd[1]: sshd@55-10.0.0.52:22-10.0.0.1:34296.service: Deactivated successfully. Feb 13 19:39:18.639067 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 19:39:18.643553 systemd-logind[1430]: Session 56 logged out. Waiting for processes to exit. Feb 13 19:39:18.644463 systemd-logind[1430]: Removed session 56. Feb 13 19:39:23.647961 systemd[1]: Started sshd@56-10.0.0.52:22-10.0.0.1:32934.service - OpenSSH per-connection server daemon (10.0.0.1:32934). Feb 13 19:39:23.681850 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 32934 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:23.683138 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:23.688322 systemd-logind[1430]: New session 57 of user core. Feb 13 19:39:23.698141 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 19:39:23.826581 sshd[5235]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:23.829764 systemd[1]: sshd@56-10.0.0.52:22-10.0.0.1:32934.service: Deactivated successfully. Feb 13 19:39:23.831604 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 19:39:23.832768 systemd-logind[1430]: Session 57 logged out. Waiting for processes to exit. Feb 13 19:39:23.833793 systemd-logind[1430]: Removed session 57. Feb 13 19:39:28.839902 systemd[1]: Started sshd@57-10.0.0.52:22-10.0.0.1:32942.service - OpenSSH per-connection server daemon (10.0.0.1:32942). Feb 13 19:39:28.870949 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 32942 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:28.872114 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:28.878825 systemd-logind[1430]: New session 58 of user core. Feb 13 19:39:28.884210 systemd[1]: Started session-58.scope - Session 58 of User core. 
Feb 13 19:39:28.999190 sshd[5271]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:29.002033 systemd[1]: sshd@57-10.0.0.52:22-10.0.0.1:32942.service: Deactivated successfully. Feb 13 19:39:29.004302 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 19:39:29.005848 systemd-logind[1430]: Session 58 logged out. Waiting for processes to exit. Feb 13 19:39:29.007067 systemd-logind[1430]: Removed session 58. Feb 13 19:39:34.010216 systemd[1]: Started sshd@58-10.0.0.52:22-10.0.0.1:36598.service - OpenSSH per-connection server daemon (10.0.0.1:36598). Feb 13 19:39:34.047157 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 36598 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:34.048436 sshd[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:34.054461 systemd-logind[1430]: New session 59 of user core. Feb 13 19:39:34.060162 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 19:39:34.167147 sshd[5306]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:34.170453 systemd[1]: sshd@58-10.0.0.52:22-10.0.0.1:36598.service: Deactivated successfully. Feb 13 19:39:34.172486 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 19:39:34.173168 systemd-logind[1430]: Session 59 logged out. Waiting for processes to exit. Feb 13 19:39:34.174077 systemd-logind[1430]: Removed session 59. Feb 13 19:39:38.077236 kubelet[2426]: E0213 19:39:38.077197 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:39.179169 systemd[1]: Started sshd@59-10.0.0.52:22-10.0.0.1:36610.service - OpenSSH per-connection server daemon (10.0.0.1:36610). Feb 13 19:39:39.212060 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 36610 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:39.213139 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:39.216970 systemd-logind[1430]: New session 60 of user core. Feb 13 19:39:39.230126 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 19:39:39.338731 sshd[5342]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:39.342525 systemd[1]: sshd@59-10.0.0.52:22-10.0.0.1:36610.service: Deactivated successfully. Feb 13 19:39:39.344157 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 19:39:39.344732 systemd-logind[1430]: Session 60 logged out. Waiting for processes to exit. Feb 13 19:39:39.345551 systemd-logind[1430]: Removed session 60. Feb 13 19:39:44.357660 systemd[1]: Started sshd@60-10.0.0.52:22-10.0.0.1:42276.service - OpenSSH per-connection server daemon (10.0.0.1:42276). Feb 13 19:39:44.395196 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 42276 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:44.396449 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:44.400636 systemd-logind[1430]: New session 61 of user core. Feb 13 19:39:44.418133 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 19:39:44.530590 sshd[5382]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:44.533872 systemd[1]: sshd@60-10.0.0.52:22-10.0.0.1:42276.service: Deactivated successfully. Feb 13 19:39:44.537119 systemd[1]: session-61.scope: Deactivated successfully. 
Feb 13 19:39:44.537724 systemd-logind[1430]: Session 61 logged out. Waiting for processes to exit. Feb 13 19:39:44.538647 systemd-logind[1430]: Removed session 61. Feb 13 19:39:49.541670 systemd[1]: Started sshd@61-10.0.0.52:22-10.0.0.1:42290.service - OpenSSH per-connection server daemon (10.0.0.1:42290). Feb 13 19:39:49.575572 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 42290 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:49.576776 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:49.580413 systemd-logind[1430]: New session 62 of user core. Feb 13 19:39:49.595165 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 19:39:49.701571 sshd[5420]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:49.704951 systemd[1]: sshd@61-10.0.0.52:22-10.0.0.1:42290.service: Deactivated successfully. Feb 13 19:39:49.706738 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 19:39:49.707873 systemd-logind[1430]: Session 62 logged out. Waiting for processes to exit. Feb 13 19:39:49.708701 systemd-logind[1430]: Removed session 62. Feb 13 19:39:52.077035 kubelet[2426]: E0213 19:39:52.076643 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:54.712468 systemd[1]: Started sshd@62-10.0.0.52:22-10.0.0.1:46482.service - OpenSSH per-connection server daemon (10.0.0.1:46482). Feb 13 19:39:54.746404 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 46482 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:54.747672 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:54.751935 systemd-logind[1430]: New session 63 of user core. Feb 13 19:39:54.766199 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 19:39:54.872422 sshd[5459]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:54.875647 systemd[1]: sshd@62-10.0.0.52:22-10.0.0.1:46482.service: Deactivated successfully. Feb 13 19:39:54.877307 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 19:39:54.879092 systemd-logind[1430]: Session 63 logged out. Waiting for processes to exit. Feb 13 19:39:54.880201 systemd-logind[1430]: Removed session 63. Feb 13 19:39:55.076613 kubelet[2426]: E0213 19:39:55.076586 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:56.077565 kubelet[2426]: E0213 19:39:56.077531 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:58.077887 kubelet[2426]: E0213 19:39:58.077484 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:59.076788 kubelet[2426]: E0213 19:39:59.076688 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:59.882496 systemd[1]: Started sshd@63-10.0.0.52:22-10.0.0.1:46484.service - OpenSSH per-connection server daemon (10.0.0.1:46484). 
Feb 13 19:39:59.918475 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:39:59.919635 sshd[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:59.924232 systemd-logind[1430]: New session 64 of user core. Feb 13 19:39:59.934049 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 19:40:00.040889 sshd[5495]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:00.044134 systemd[1]: sshd@63-10.0.0.52:22-10.0.0.1:46484.service: Deactivated successfully. Feb 13 19:40:00.045715 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 19:40:00.046410 systemd-logind[1430]: Session 64 logged out. Waiting for processes to exit. Feb 13 19:40:00.047199 systemd-logind[1430]: Removed session 64. Feb 13 19:40:00.077556 kubelet[2426]: E0213 19:40:00.077513 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:05.051672 systemd[1]: Started sshd@64-10.0.0.52:22-10.0.0.1:59106.service - OpenSSH per-connection server daemon (10.0.0.1:59106). Feb 13 19:40:05.086000 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 59106 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:05.087195 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:05.091053 systemd-logind[1430]: New session 65 of user core. Feb 13 19:40:05.107139 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 19:40:05.211252 sshd[5532]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:05.215090 systemd[1]: sshd@64-10.0.0.52:22-10.0.0.1:59106.service: Deactivated successfully. Feb 13 19:40:05.216678 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 19:40:05.218358 systemd-logind[1430]: Session 65 logged out. Waiting for processes to exit. Feb 13 19:40:05.219161 systemd-logind[1430]: Removed session 65. Feb 13 19:40:10.221837 systemd[1]: Started sshd@65-10.0.0.52:22-10.0.0.1:59112.service - OpenSSH per-connection server daemon (10.0.0.1:59112). Feb 13 19:40:10.255912 sshd[5567]: Accepted publickey for core from 10.0.0.1 port 59112 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:10.257230 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:10.261669 systemd-logind[1430]: New session 66 of user core. Feb 13 19:40:10.271157 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 19:40:10.379812 sshd[5567]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:10.383213 systemd[1]: sshd@65-10.0.0.52:22-10.0.0.1:59112.service: Deactivated successfully. Feb 13 19:40:10.385827 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 19:40:10.386680 systemd-logind[1430]: Session 66 logged out. Waiting for processes to exit. Feb 13 19:40:10.387653 systemd-logind[1430]: Removed session 66. Feb 13 19:40:15.390588 systemd[1]: Started sshd@66-10.0.0.52:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). 
Feb 13 19:40:15.424650 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:15.425974 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:15.429867 systemd-logind[1430]: New session 67 of user core. Feb 13 19:40:15.444202 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 19:40:15.555206 sshd[5602]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:15.558645 systemd[1]: sshd@66-10.0.0.52:22-10.0.0.1:53980.service: Deactivated successfully. Feb 13 19:40:15.560413 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 19:40:15.560967 systemd-logind[1430]: Session 67 logged out. Waiting for processes to exit. Feb 13 19:40:15.561970 systemd-logind[1430]: Removed session 67. Feb 13 19:40:20.571941 systemd[1]: Started sshd@67-10.0.0.52:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992). Feb 13 19:40:20.604234 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:20.605441 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:20.611489 systemd-logind[1430]: New session 68 of user core. Feb 13 19:40:20.620133 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 19:40:20.737975 sshd[5639]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:20.741415 systemd[1]: sshd@67-10.0.0.52:22-10.0.0.1:53992.service: Deactivated successfully. Feb 13 19:40:20.743043 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 19:40:20.744659 systemd-logind[1430]: Session 68 logged out. Waiting for processes to exit. Feb 13 19:40:20.745786 systemd-logind[1430]: Removed session 68. Feb 13 19:40:25.748664 systemd[1]: Started sshd@68-10.0.0.52:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). Feb 13 19:40:25.783024 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:25.784369 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:25.789004 systemd-logind[1430]: New session 69 of user core. Feb 13 19:40:25.796191 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 19:40:25.901979 sshd[5675]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:25.905222 systemd[1]: sshd@68-10.0.0.52:22-10.0.0.1:54764.service: Deactivated successfully. Feb 13 19:40:25.906982 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 19:40:25.907650 systemd-logind[1430]: Session 69 logged out. Waiting for processes to exit. Feb 13 19:40:25.908613 systemd-logind[1430]: Removed session 69. Feb 13 19:40:30.915638 systemd[1]: Started sshd@69-10.0.0.52:22-10.0.0.1:54772.service - OpenSSH per-connection server daemon (10.0.0.1:54772). Feb 13 19:40:30.949855 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 54772 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:30.951023 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:30.955660 systemd-logind[1430]: New session 70 of user core. Feb 13 19:40:30.964154 systemd[1]: Started session-70.scope - Session 70 of User core. 
Feb 13 19:40:31.081458 sshd[5711]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:31.085382 systemd[1]: sshd@69-10.0.0.52:22-10.0.0.1:54772.service: Deactivated successfully. Feb 13 19:40:31.087684 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 19:40:31.088946 systemd-logind[1430]: Session 70 logged out. Waiting for processes to exit. Feb 13 19:40:31.090286 systemd-logind[1430]: Removed session 70. Feb 13 19:40:36.095944 systemd[1]: Started sshd@70-10.0.0.52:22-10.0.0.1:45526.service - OpenSSH per-connection server daemon (10.0.0.1:45526). Feb 13 19:40:36.128635 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 45526 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:36.129852 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:36.135015 systemd-logind[1430]: New session 71 of user core. Feb 13 19:40:36.142143 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 19:40:36.251662 sshd[5747]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:36.256601 systemd-logind[1430]: Session 71 logged out. Waiting for processes to exit. Feb 13 19:40:36.257212 systemd[1]: sshd@70-10.0.0.52:22-10.0.0.1:45526.service: Deactivated successfully. Feb 13 19:40:36.258938 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 19:40:36.260905 systemd-logind[1430]: Removed session 71. Feb 13 19:40:41.078686 kubelet[2426]: E0213 19:40:41.078645 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:41.261687 systemd[1]: Started sshd@71-10.0.0.52:22-10.0.0.1:45538.service - OpenSSH per-connection server daemon (10.0.0.1:45538). Feb 13 19:40:41.297530 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 45538 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:41.298685 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:41.302179 systemd-logind[1430]: New session 72 of user core. Feb 13 19:40:41.313137 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 19:40:41.425292 sshd[5782]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:41.427872 systemd[1]: sshd@71-10.0.0.52:22-10.0.0.1:45538.service: Deactivated successfully. Feb 13 19:40:41.431391 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 19:40:41.432821 systemd-logind[1430]: Session 72 logged out. Waiting for processes to exit. Feb 13 19:40:41.434191 systemd-logind[1430]: Removed session 72. Feb 13 19:40:46.436668 systemd[1]: Started sshd@72-10.0.0.52:22-10.0.0.1:34160.service - OpenSSH per-connection server daemon (10.0.0.1:34160). Feb 13 19:40:46.470496 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 34160 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:46.471741 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:46.475998 systemd-logind[1430]: New session 73 of user core. Feb 13 19:40:46.485149 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 19:40:46.595893 sshd[5820]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:46.599154 systemd[1]: sshd@72-10.0.0.52:22-10.0.0.1:34160.service: Deactivated successfully. Feb 13 19:40:46.600821 systemd[1]: session-73.scope: Deactivated successfully. 
Feb 13 19:40:46.602556 systemd-logind[1430]: Session 73 logged out. Waiting for processes to exit. Feb 13 19:40:46.603338 systemd-logind[1430]: Removed session 73. Feb 13 19:40:51.609683 systemd[1]: Started sshd@73-10.0.0.52:22-10.0.0.1:34170.service - OpenSSH per-connection server daemon (10.0.0.1:34170). Feb 13 19:40:51.643328 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 34170 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:51.644487 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:51.648121 systemd-logind[1430]: New session 74 of user core. Feb 13 19:40:51.655163 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 19:40:51.763362 sshd[5857]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:51.766308 systemd[1]: sshd@73-10.0.0.52:22-10.0.0.1:34170.service: Deactivated successfully. Feb 13 19:40:51.767890 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 19:40:51.770947 systemd-logind[1430]: Session 74 logged out. Waiting for processes to exit. Feb 13 19:40:51.773060 systemd-logind[1430]: Removed session 74. Feb 13 19:40:56.774310 systemd[1]: Started sshd@74-10.0.0.52:22-10.0.0.1:54176.service - OpenSSH per-connection server daemon (10.0.0.1:54176). Feb 13 19:40:56.814657 sshd[5893]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:40:56.815229 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:56.819394 systemd-logind[1430]: New session 75 of user core. Feb 13 19:40:56.830161 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 19:40:56.947032 sshd[5893]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:56.950383 systemd[1]: sshd@74-10.0.0.52:22-10.0.0.1:54176.service: Deactivated successfully. Feb 13 19:40:56.953727 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 19:40:56.957235 systemd-logind[1430]: Session 75 logged out. Waiting for processes to exit. Feb 13 19:40:56.959179 systemd-logind[1430]: Removed session 75. Feb 13 19:41:01.957412 systemd[1]: Started sshd@75-10.0.0.52:22-10.0.0.1:54190.service - OpenSSH per-connection server daemon (10.0.0.1:54190). Feb 13 19:41:01.994248 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 54190 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:01.995414 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:01.999787 systemd-logind[1430]: New session 76 of user core. Feb 13 19:41:02.017148 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 19:41:02.077012 kubelet[2426]: E0213 19:41:02.076922 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.124812 sshd[5928]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:02.127975 systemd[1]: sshd@75-10.0.0.52:22-10.0.0.1:54190.service: Deactivated successfully. Feb 13 19:41:02.130087 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 19:41:02.130718 systemd-logind[1430]: Session 76 logged out. Waiting for processes to exit. Feb 13 19:41:02.131432 systemd-logind[1430]: Removed session 76. 
Feb 13 19:41:03.077514 kubelet[2426]: E0213 19:41:03.077413 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:03.077866 kubelet[2426]: E0213 19:41:03.077627 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.076780 kubelet[2426]: E0213 19:41:04.076699 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:07.135576 systemd[1]: Started sshd@76-10.0.0.52:22-10.0.0.1:39388.service - OpenSSH per-connection server daemon (10.0.0.1:39388). Feb 13 19:41:07.178670 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 39388 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:07.179914 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:07.187071 systemd-logind[1430]: New session 77 of user core. Feb 13 19:41:07.193133 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 19:41:07.297844 sshd[5963]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:07.300486 systemd[1]: sshd@76-10.0.0.52:22-10.0.0.1:39388.service: Deactivated successfully. Feb 13 19:41:07.303845 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 19:41:07.306103 systemd-logind[1430]: Session 77 logged out. Waiting for processes to exit. Feb 13 19:41:07.306929 systemd-logind[1430]: Removed session 77. Feb 13 19:41:12.314097 systemd[1]: Started sshd@77-10.0.0.52:22-10.0.0.1:39400.service - OpenSSH per-connection server daemon (10.0.0.1:39400). Feb 13 19:41:12.351282 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 39400 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:12.352690 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:12.357375 systemd-logind[1430]: New session 78 of user core. Feb 13 19:41:12.366166 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 19:41:12.487132 sshd[5998]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:12.496604 systemd[1]: sshd@77-10.0.0.52:22-10.0.0.1:39400.service: Deactivated successfully. Feb 13 19:41:12.498152 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 19:41:12.499889 systemd-logind[1430]: Session 78 logged out. Waiting for processes to exit. Feb 13 19:41:12.500840 systemd[1]: Started sshd@78-10.0.0.52:22-10.0.0.1:50104.service - OpenSSH per-connection server daemon (10.0.0.1:50104). Feb 13 19:41:12.501942 systemd-logind[1430]: Removed session 78. Feb 13 19:41:12.553402 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 50104 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:12.553170 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:12.561054 systemd-logind[1430]: New session 79 of user core. Feb 13 19:41:12.576176 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 19:41:12.780092 sshd[6012]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:12.788492 systemd[1]: sshd@78-10.0.0.52:22-10.0.0.1:50104.service: Deactivated successfully. 
Feb 13 19:41:12.790740 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 19:41:12.793197 systemd-logind[1430]: Session 79 logged out. Waiting for processes to exit. Feb 13 19:41:12.803305 systemd[1]: Started sshd@79-10.0.0.52:22-10.0.0.1:50116.service - OpenSSH per-connection server daemon (10.0.0.1:50116). Feb 13 19:41:12.804591 systemd-logind[1430]: Removed session 79. Feb 13 19:41:12.839870 sshd[6025]: Accepted publickey for core from 10.0.0.1 port 50116 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:12.841520 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:12.847312 systemd-logind[1430]: New session 80 of user core. Feb 13 19:41:12.855232 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 19:41:13.980194 sshd[6025]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:13.989131 systemd[1]: sshd@79-10.0.0.52:22-10.0.0.1:50116.service: Deactivated successfully. Feb 13 19:41:13.994647 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 19:41:13.999042 systemd-logind[1430]: Session 80 logged out. Waiting for processes to exit. Feb 13 19:41:14.009467 systemd[1]: Started sshd@80-10.0.0.52:22-10.0.0.1:50120.service - OpenSSH per-connection server daemon (10.0.0.1:50120). Feb 13 19:41:14.010550 systemd-logind[1430]: Removed session 80. Feb 13 19:41:14.039024 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 50120 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:14.040269 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:14.044046 systemd-logind[1430]: New session 81 of user core. Feb 13 19:41:14.050171 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 19:41:14.257909 sshd[6069]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:14.265605 systemd[1]: sshd@80-10.0.0.52:22-10.0.0.1:50120.service: Deactivated successfully. Feb 13 19:41:14.269191 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 19:41:14.272316 systemd-logind[1430]: Session 81 logged out. Waiting for processes to exit. Feb 13 19:41:14.288332 systemd[1]: Started sshd@81-10.0.0.52:22-10.0.0.1:50136.service - OpenSSH per-connection server daemon (10.0.0.1:50136). Feb 13 19:41:14.289356 systemd-logind[1430]: Removed session 81. Feb 13 19:41:14.318227 sshd[6082]: Accepted publickey for core from 10.0.0.1 port 50136 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:14.319459 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:14.323155 systemd-logind[1430]: New session 82 of user core. Feb 13 19:41:14.330158 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 19:41:14.437157 sshd[6082]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:14.439924 systemd-logind[1430]: Session 82 logged out. Waiting for processes to exit. Feb 13 19:41:14.440104 systemd[1]: sshd@81-10.0.0.52:22-10.0.0.1:50136.service: Deactivated successfully. Feb 13 19:41:14.441514 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 19:41:14.443040 systemd-logind[1430]: Removed session 82. 
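Each incoming connection in the blocks above gets its own per-connection daemon instance, and the instance name encodes both endpoints: in sshd@79-10.0.0.52:22-10.0.0.1:50104.service, 79 looks like a running connection counter, 10.0.0.52:22 is the local listener, and 10.0.0.1:50104 is the peer. The short sketch below pulls the peer addresses back out of a saved excerpt and counts connections per client; the unit-name layout is read off the entries above rather than taken from a specification, and ssh-sessions.log is again a placeholder path.

```python
#!/usr/bin/env python3
"""Count SSH connections per peer using the per-connection unit names in the log.

Sketch only: the sshd@<counter>-<local>:<port>-<peer>:<port>.service layout is
inferred from the excerpt above.
"""
import re
from collections import Counter

UNIT = re.compile(r"sshd@\d+-[\d.]+:\d+-(?P<peer>[\d.]+):\d+\.service")

def connections_per_peer(text: str) -> Counter:
    peers = Counter()
    # Each unit name also shows up in its "Deactivated successfully" entry,
    # so count only the "Started sshd@..." occurrences to avoid double counting.
    for unit in re.findall(r"Started (sshd@\S+\.service)", text):
        match = UNIT.match(unit)
        if match:
            peers[match["peer"]] += 1
    return peers

if __name__ == "__main__":
    with open("ssh-sessions.log") as handle:
        for peer, count in connections_per_peer(handle.read()).most_common():
            print(f"{peer}: {count} connections")
```

In this excerpt every connection originates from 10.0.0.1, so only the counter and the steadily increasing session numbers change from one block to the next.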
Feb 13 19:41:16.076785 kubelet[2426]: E0213 19:41:16.076750 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:19.447646 systemd[1]: Started sshd@82-10.0.0.52:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148). Feb 13 19:41:19.485324 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:19.486645 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:19.490900 systemd-logind[1430]: New session 83 of user core. Feb 13 19:41:19.503177 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 19:41:19.614012 sshd[6117]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:19.617720 systemd[1]: sshd@82-10.0.0.52:22-10.0.0.1:50148.service: Deactivated successfully. Feb 13 19:41:19.619323 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 19:41:19.619927 systemd-logind[1430]: Session 83 logged out. Waiting for processes to exit. Feb 13 19:41:19.620967 systemd-logind[1430]: Removed session 83. Feb 13 19:41:24.624439 systemd[1]: Started sshd@83-10.0.0.52:22-10.0.0.1:51242.service - OpenSSH per-connection server daemon (10.0.0.1:51242). Feb 13 19:41:24.658364 sshd[6154]: Accepted publickey for core from 10.0.0.1 port 51242 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:24.659872 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:24.664211 systemd-logind[1430]: New session 84 of user core. Feb 13 19:41:24.673145 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 19:41:24.775535 sshd[6154]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:24.779249 systemd[1]: sshd@83-10.0.0.52:22-10.0.0.1:51242.service: Deactivated successfully. Feb 13 19:41:24.781457 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 19:41:24.782253 systemd-logind[1430]: Session 84 logged out. Waiting for processes to exit. Feb 13 19:41:24.783164 systemd-logind[1430]: Removed session 84. Feb 13 19:41:27.077907 kubelet[2426]: E0213 19:41:27.077539 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:29.790085 systemd[1]: Started sshd@84-10.0.0.52:22-10.0.0.1:51244.service - OpenSSH per-connection server daemon (10.0.0.1:51244). Feb 13 19:41:29.826508 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 51244 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:29.827724 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:29.832048 systemd-logind[1430]: New session 85 of user core. Feb 13 19:41:29.843129 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 19:41:29.948193 sshd[6189]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:29.951357 systemd[1]: sshd@84-10.0.0.52:22-10.0.0.1:51244.service: Deactivated successfully. Feb 13 19:41:29.954594 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 19:41:29.955518 systemd-logind[1430]: Session 85 logged out. Waiting for processes to exit. Feb 13 19:41:29.956432 systemd-logind[1430]: Removed session 85. 
Feb 13 19:41:34.958979 systemd[1]: Started sshd@85-10.0.0.52:22-10.0.0.1:43614.service - OpenSSH per-connection server daemon (10.0.0.1:43614). Feb 13 19:41:34.995121 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 43614 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:34.996864 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:35.001704 systemd-logind[1430]: New session 86 of user core. Feb 13 19:41:35.009192 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 19:41:35.117851 sshd[6225]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:35.121074 systemd[1]: sshd@85-10.0.0.52:22-10.0.0.1:43614.service: Deactivated successfully. Feb 13 19:41:35.122823 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 19:41:35.123701 systemd-logind[1430]: Session 86 logged out. Waiting for processes to exit. Feb 13 19:41:35.124530 systemd-logind[1430]: Removed session 86. Feb 13 19:41:40.128544 systemd[1]: Started sshd@86-10.0.0.52:22-10.0.0.1:43626.service - OpenSSH per-connection server daemon (10.0.0.1:43626). Feb 13 19:41:40.162715 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 43626 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:40.163905 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.167455 systemd-logind[1430]: New session 87 of user core. Feb 13 19:41:40.177158 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 19:41:40.284067 sshd[6262]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:40.287386 systemd[1]: sshd@86-10.0.0.52:22-10.0.0.1:43626.service: Deactivated successfully. Feb 13 19:41:40.289058 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 19:41:40.289659 systemd-logind[1430]: Session 87 logged out. Waiting for processes to exit. Feb 13 19:41:40.290488 systemd-logind[1430]: Removed session 87. Feb 13 19:41:45.299490 systemd[1]: Started sshd@87-10.0.0.52:22-10.0.0.1:35566.service - OpenSSH per-connection server daemon (10.0.0.1:35566). Feb 13 19:41:45.334141 sshd[6299]: Accepted publickey for core from 10.0.0.1 port 35566 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:45.335430 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:45.338964 systemd-logind[1430]: New session 88 of user core. Feb 13 19:41:45.346130 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 19:41:45.451351 sshd[6299]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:45.454605 systemd[1]: sshd@87-10.0.0.52:22-10.0.0.1:35566.service: Deactivated successfully. Feb 13 19:41:45.456232 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 19:41:45.456784 systemd-logind[1430]: Session 88 logged out. Waiting for processes to exit. Feb 13 19:41:45.457496 systemd-logind[1430]: Removed session 88. Feb 13 19:41:50.462545 systemd[1]: Started sshd@88-10.0.0.52:22-10.0.0.1:35580.service - OpenSSH per-connection server daemon (10.0.0.1:35580). Feb 13 19:41:50.496840 sshd[6336]: Accepted publickey for core from 10.0.0.1 port 35580 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:41:50.498263 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:50.501683 systemd-logind[1430]: New session 89 of user core. 
Feb 13 19:41:50.509131 systemd[1]: Started session-89.scope - Session 89 of User core.
Feb 13 19:41:50.614668 sshd[6336]: pam_unix(sshd:session): session closed for user core
Feb 13 19:41:50.618026 systemd[1]: sshd@88-10.0.0.52:22-10.0.0.1:35580.service: Deactivated successfully.
Feb 13 19:41:50.620383 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 19:41:50.621027 systemd-logind[1430]: Session 89 logged out. Waiting for processes to exit.
Feb 13 19:41:50.621717 systemd-logind[1430]: Removed session 89.
Feb 13 19:41:51.077336 kubelet[2426]: E0213 19:41:51.077296 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:41:55.625378 systemd[1]: Started sshd@89-10.0.0.52:22-10.0.0.1:42854.service - OpenSSH per-connection server daemon (10.0.0.1:42854).
Feb 13 19:41:55.671747 sshd[6371]: Accepted publickey for core from 10.0.0.1 port 42854 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:41:55.672930 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:41:55.676801 systemd-logind[1430]: New session 90 of user core.
Feb 13 19:41:55.694123 systemd[1]: Started session-90.scope - Session 90 of User core.
Feb 13 19:41:55.809196 sshd[6371]: pam_unix(sshd:session): session closed for user core
Feb 13 19:41:55.812454 systemd[1]: sshd@89-10.0.0.52:22-10.0.0.1:42854.service: Deactivated successfully.
Feb 13 19:41:55.814142 systemd[1]: session-90.scope: Deactivated successfully.
Feb 13 19:41:55.814805 systemd-logind[1430]: Session 90 logged out. Waiting for processes to exit.
Feb 13 19:41:55.815757 systemd-logind[1430]: Removed session 90.
Feb 13 19:42:00.819774 systemd[1]: Started sshd@90-10.0.0.52:22-10.0.0.1:42858.service - OpenSSH per-connection server daemon (10.0.0.1:42858).
Feb 13 19:42:00.854106 sshd[6406]: Accepted publickey for core from 10.0.0.1 port 42858 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:00.855331 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:00.859915 systemd-logind[1430]: New session 91 of user core.
Feb 13 19:42:00.870154 systemd[1]: Started session-91.scope - Session 91 of User core.
Feb 13 19:42:00.983089 sshd[6406]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:00.986822 systemd[1]: sshd@90-10.0.0.52:22-10.0.0.1:42858.service: Deactivated successfully.
Feb 13 19:42:00.989475 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 19:42:00.990071 systemd-logind[1430]: Session 91 logged out. Waiting for processes to exit.
Feb 13 19:42:00.991381 systemd-logind[1430]: Removed session 91.
Feb 13 19:42:05.077001 kubelet[2426]: E0213 19:42:05.076956 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:05.993646 systemd[1]: Started sshd@91-10.0.0.52:22-10.0.0.1:48606.service - OpenSSH per-connection server daemon (10.0.0.1:48606).
Feb 13 19:42:06.028486 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 48606 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:06.029736 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:06.033527 systemd-logind[1430]: New session 92 of user core.
Feb 13 19:42:06.043118 systemd[1]: Started session-92.scope - Session 92 of User core.
Feb 13 19:42:06.147402 sshd[6441]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:06.149971 systemd[1]: sshd@91-10.0.0.52:22-10.0.0.1:48606.service: Deactivated successfully.
Feb 13 19:42:06.151664 systemd[1]: session-92.scope: Deactivated successfully.
Feb 13 19:42:06.152920 systemd-logind[1430]: Session 92 logged out. Waiting for processes to exit.
Feb 13 19:42:06.153887 systemd-logind[1430]: Removed session 92.
Feb 13 19:42:11.157440 systemd[1]: Started sshd@92-10.0.0.52:22-10.0.0.1:48614.service - OpenSSH per-connection server daemon (10.0.0.1:48614).
Feb 13 19:42:11.191004 sshd[6476]: Accepted publickey for core from 10.0.0.1 port 48614 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:11.192134 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:11.195732 systemd-logind[1430]: New session 93 of user core.
Feb 13 19:42:11.202118 systemd[1]: Started session-93.scope - Session 93 of User core.
Feb 13 19:42:11.305984 sshd[6476]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:11.309127 systemd[1]: sshd@92-10.0.0.52:22-10.0.0.1:48614.service: Deactivated successfully.
Feb 13 19:42:11.310776 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 19:42:11.311337 systemd-logind[1430]: Session 93 logged out. Waiting for processes to exit.
Feb 13 19:42:11.312177 systemd-logind[1430]: Removed session 93.
Feb 13 19:42:14.078726 kubelet[2426]: E0213 19:42:14.078514 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:16.316503 systemd[1]: Started sshd@93-10.0.0.52:22-10.0.0.1:38986.service - OpenSSH per-connection server daemon (10.0.0.1:38986).
Feb 13 19:42:16.350536 sshd[6512]: Accepted publickey for core from 10.0.0.1 port 38986 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:16.351738 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:16.355270 systemd-logind[1430]: New session 94 of user core.
Feb 13 19:42:16.372146 systemd[1]: Started session-94.scope - Session 94 of User core.
Feb 13 19:42:16.477922 sshd[6512]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:16.481598 systemd[1]: sshd@93-10.0.0.52:22-10.0.0.1:38986.service: Deactivated successfully.
Feb 13 19:42:16.483306 systemd[1]: session-94.scope: Deactivated successfully.
Feb 13 19:42:16.483911 systemd-logind[1430]: Session 94 logged out. Waiting for processes to exit.
Feb 13 19:42:16.484801 systemd-logind[1430]: Removed session 94.
Feb 13 19:42:21.488693 systemd[1]: Started sshd@94-10.0.0.52:22-10.0.0.1:39000.service - OpenSSH per-connection server daemon (10.0.0.1:39000).
Feb 13 19:42:21.522585 sshd[6550]: Accepted publickey for core from 10.0.0.1 port 39000 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:21.523773 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:21.527648 systemd-logind[1430]: New session 95 of user core.
Feb 13 19:42:21.543129 systemd[1]: Started session-95.scope - Session 95 of User core.
Feb 13 19:42:21.648564 sshd[6550]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:21.651622 systemd[1]: sshd@94-10.0.0.52:22-10.0.0.1:39000.service: Deactivated successfully.
Feb 13 19:42:21.653861 systemd[1]: session-95.scope: Deactivated successfully.
Feb 13 19:42:21.654705 systemd-logind[1430]: Session 95 logged out. Waiting for processes to exit.
Feb 13 19:42:21.655553 systemd-logind[1430]: Removed session 95.
Feb 13 19:42:22.077034 kubelet[2426]: E0213 19:42:22.076545 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:25.077335 kubelet[2426]: E0213 19:42:25.077289 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:26.077041 kubelet[2426]: E0213 19:42:26.076983 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:26.659598 systemd[1]: Started sshd@95-10.0.0.52:22-10.0.0.1:51904.service - OpenSSH per-connection server daemon (10.0.0.1:51904).
Feb 13 19:42:26.693378 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 51904 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:26.694590 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:26.698782 systemd-logind[1430]: New session 96 of user core.
Feb 13 19:42:26.714147 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 19:42:26.818639 sshd[6589]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:26.821795 systemd[1]: sshd@95-10.0.0.52:22-10.0.0.1:51904.service: Deactivated successfully.
Feb 13 19:42:26.823331 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 19:42:26.825050 systemd-logind[1430]: Session 96 logged out. Waiting for processes to exit.
Feb 13 19:42:26.825885 systemd-logind[1430]: Removed session 96.
Feb 13 19:42:31.829418 systemd[1]: Started sshd@96-10.0.0.52:22-10.0.0.1:51920.service - OpenSSH per-connection server daemon (10.0.0.1:51920).
Feb 13 19:42:31.863284 sshd[6624]: Accepted publickey for core from 10.0.0.1 port 51920 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:31.864405 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:31.867765 systemd-logind[1430]: New session 97 of user core.
Feb 13 19:42:31.879110 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 19:42:31.983656 sshd[6624]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:31.986643 systemd[1]: sshd@96-10.0.0.52:22-10.0.0.1:51920.service: Deactivated successfully.
Feb 13 19:42:31.988190 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 19:42:31.989546 systemd-logind[1430]: Session 97 logged out. Waiting for processes to exit.
Feb 13 19:42:31.990682 systemd-logind[1430]: Removed session 97.
Feb 13 19:42:33.077491 kubelet[2426]: E0213 19:42:33.077444 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:36.994805 systemd[1]: Started sshd@97-10.0.0.52:22-10.0.0.1:39032.service - OpenSSH per-connection server daemon (10.0.0.1:39032).
Feb 13 19:42:37.029482 sshd[6660]: Accepted publickey for core from 10.0.0.1 port 39032 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:37.030662 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:37.034169 systemd-logind[1430]: New session 98 of user core.
Feb 13 19:42:37.045103 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 19:42:37.149259 sshd[6660]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:37.152209 systemd[1]: sshd@97-10.0.0.52:22-10.0.0.1:39032.service: Deactivated successfully.
Feb 13 19:42:37.153758 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 19:42:37.155447 systemd-logind[1430]: Session 98 logged out. Waiting for processes to exit.
Feb 13 19:42:37.156332 systemd-logind[1430]: Removed session 98.
Feb 13 19:42:42.159890 systemd[1]: Started sshd@98-10.0.0.52:22-10.0.0.1:39042.service - OpenSSH per-connection server daemon (10.0.0.1:39042).
Feb 13 19:42:42.194777 sshd[6695]: Accepted publickey for core from 10.0.0.1 port 39042 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:42.196141 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:42.199831 systemd-logind[1430]: New session 99 of user core.
Feb 13 19:42:42.210147 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 19:42:42.323461 sshd[6695]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:42.326671 systemd[1]: sshd@98-10.0.0.52:22-10.0.0.1:39042.service: Deactivated successfully.
Feb 13 19:42:42.328927 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 19:42:42.329854 systemd-logind[1430]: Session 99 logged out. Waiting for processes to exit.
Feb 13 19:42:42.331176 systemd-logind[1430]: Removed session 99.
Feb 13 19:42:47.334268 systemd[1]: Started sshd@99-10.0.0.52:22-10.0.0.1:54430.service - OpenSSH per-connection server daemon (10.0.0.1:54430).
Feb 13 19:42:47.368573 sshd[6732]: Accepted publickey for core from 10.0.0.1 port 54430 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:47.369866 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:47.373928 systemd-logind[1430]: New session 100 of user core.
Feb 13 19:42:47.386171 systemd[1]: Started session-100.scope - Session 100 of User core.
Feb 13 19:42:47.506409 sshd[6732]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:47.509110 systemd[1]: sshd@99-10.0.0.52:22-10.0.0.1:54430.service: Deactivated successfully.
Feb 13 19:42:47.510764 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 19:42:47.512120 systemd-logind[1430]: Session 100 logged out. Waiting for processes to exit.
Feb 13 19:42:47.512919 systemd-logind[1430]: Removed session 100.
Feb 13 19:42:52.521490 systemd[1]: Started sshd@100-10.0.0.52:22-10.0.0.1:57042.service - OpenSSH per-connection server daemon (10.0.0.1:57042).
Feb 13 19:42:52.556100 sshd[6769]: Accepted publickey for core from 10.0.0.1 port 57042 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:52.557354 sshd[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:52.560879 systemd-logind[1430]: New session 101 of user core.
Feb 13 19:42:52.570190 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 19:42:52.675068 sshd[6769]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:52.678360 systemd[1]: sshd@100-10.0.0.52:22-10.0.0.1:57042.service: Deactivated successfully.
Feb 13 19:42:52.681409 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 19:42:52.682022 systemd-logind[1430]: Session 101 logged out. Waiting for processes to exit.
Feb 13 19:42:52.682818 systemd-logind[1430]: Removed session 101.
Feb 13 19:42:57.689529 systemd[1]: Started sshd@101-10.0.0.52:22-10.0.0.1:57044.service - OpenSSH per-connection server daemon (10.0.0.1:57044).
Feb 13 19:42:57.723675 sshd[6804]: Accepted publickey for core from 10.0.0.1 port 57044 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:42:57.724942 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:57.728854 systemd-logind[1430]: New session 102 of user core.
Feb 13 19:42:57.742143 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 19:42:57.849317 sshd[6804]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:57.852484 systemd[1]: sshd@101-10.0.0.52:22-10.0.0.1:57044.service: Deactivated successfully.
Feb 13 19:42:57.854150 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 19:42:57.854741 systemd-logind[1430]: Session 102 logged out. Waiting for processes to exit.
Feb 13 19:42:57.855629 systemd-logind[1430]: Removed session 102.
Feb 13 19:43:02.860481 systemd[1]: Started sshd@102-10.0.0.52:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110).
Feb 13 19:43:02.895884 sshd[6840]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:02.897084 sshd[6840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:02.900862 systemd-logind[1430]: New session 103 of user core.
Feb 13 19:43:02.916126 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 19:43:03.023031 sshd[6840]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:03.026588 systemd[1]: sshd@102-10.0.0.52:22-10.0.0.1:36110.service: Deactivated successfully.
Feb 13 19:43:03.028817 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 19:43:03.029913 systemd-logind[1430]: Session 103 logged out. Waiting for processes to exit.
Feb 13 19:43:03.031244 systemd-logind[1430]: Removed session 103.
Feb 13 19:43:05.077180 kubelet[2426]: E0213 19:43:05.077147 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:08.034114 systemd[1]: Started sshd@103-10.0.0.52:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118).
Feb 13 19:43:08.069495 sshd[6875]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:08.070964 sshd[6875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:08.075155 systemd-logind[1430]: New session 104 of user core.
Feb 13 19:43:08.083165 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 19:43:08.195818 sshd[6875]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:08.199343 systemd[1]: sshd@103-10.0.0.52:22-10.0.0.1:36118.service: Deactivated successfully.
Feb 13 19:43:08.201190 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 19:43:08.201810 systemd-logind[1430]: Session 104 logged out. Waiting for processes to exit.
Feb 13 19:43:08.202736 systemd-logind[1430]: Removed session 104.
Feb 13 19:43:13.206572 systemd[1]: Started sshd@104-10.0.0.52:22-10.0.0.1:45202.service - OpenSSH per-connection server daemon (10.0.0.1:45202).
Feb 13 19:43:13.240628 sshd[6911]: Accepted publickey for core from 10.0.0.1 port 45202 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:13.241935 sshd[6911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:13.245529 systemd-logind[1430]: New session 105 of user core.
Feb 13 19:43:13.253151 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 19:43:13.358013 sshd[6911]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:13.361547 systemd[1]: sshd@104-10.0.0.52:22-10.0.0.1:45202.service: Deactivated successfully.
Feb 13 19:43:13.363328 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 19:43:13.363934 systemd-logind[1430]: Session 105 logged out. Waiting for processes to exit.
Feb 13 19:43:13.364787 systemd-logind[1430]: Removed session 105.
Feb 13 19:43:15.076703 kubelet[2426]: E0213 19:43:15.076662 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:18.368346 systemd[1]: Started sshd@105-10.0.0.52:22-10.0.0.1:45216.service - OpenSSH per-connection server daemon (10.0.0.1:45216).
Feb 13 19:43:18.402532 sshd[6952]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:18.403830 sshd[6952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:18.407475 systemd-logind[1430]: New session 106 of user core.
Feb 13 19:43:18.422137 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 19:43:18.529804 sshd[6952]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:18.533231 systemd[1]: sshd@105-10.0.0.52:22-10.0.0.1:45216.service: Deactivated successfully.
Feb 13 19:43:18.535044 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 19:43:18.536691 systemd-logind[1430]: Session 106 logged out. Waiting for processes to exit.
Feb 13 19:43:18.538035 systemd-logind[1430]: Removed session 106.
Feb 13 19:43:23.541529 systemd[1]: Started sshd@106-10.0.0.52:22-10.0.0.1:37300.service - OpenSSH per-connection server daemon (10.0.0.1:37300).
Feb 13 19:43:23.575979 sshd[6990]: Accepted publickey for core from 10.0.0.1 port 37300 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:23.577283 sshd[6990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:23.581418 systemd-logind[1430]: New session 107 of user core.
Feb 13 19:43:23.588132 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 19:43:23.694314 sshd[6990]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:23.697624 systemd[1]: sshd@106-10.0.0.52:22-10.0.0.1:37300.service: Deactivated successfully.
Feb 13 19:43:23.699333 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 19:43:23.700522 systemd-logind[1430]: Session 107 logged out. Waiting for processes to exit.
Feb 13 19:43:23.701290 systemd-logind[1430]: Removed session 107.
Feb 13 19:43:27.077410 kubelet[2426]: E0213 19:43:27.077376 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:28.704482 systemd[1]: Started sshd@107-10.0.0.52:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310).
Feb 13 19:43:28.738234 sshd[7025]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:28.739521 sshd[7025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:28.743379 systemd-logind[1430]: New session 108 of user core.
Feb 13 19:43:28.752151 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 19:43:28.858183 sshd[7025]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:28.860777 systemd[1]: sshd@107-10.0.0.52:22-10.0.0.1:37310.service: Deactivated successfully.
Feb 13 19:43:28.863408 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 19:43:28.864658 systemd-logind[1430]: Session 108 logged out. Waiting for processes to exit.
Feb 13 19:43:28.865681 systemd-logind[1430]: Removed session 108.
Feb 13 19:43:29.077492 kubelet[2426]: E0213 19:43:29.077400 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:33.868442 systemd[1]: Started sshd@108-10.0.0.52:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262).
Feb 13 19:43:33.902641 sshd[7060]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:33.903801 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:33.907368 systemd-logind[1430]: New session 109 of user core.
Feb 13 19:43:33.919213 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 19:43:34.022252 sshd[7060]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:34.025443 systemd[1]: sshd@108-10.0.0.52:22-10.0.0.1:38262.service: Deactivated successfully.
Feb 13 19:43:34.027050 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 19:43:34.027658 systemd-logind[1430]: Session 109 logged out. Waiting for processes to exit.
Feb 13 19:43:34.028836 systemd-logind[1430]: Removed session 109.
Feb 13 19:43:34.077400 kubelet[2426]: E0213 19:43:34.077310 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:38.077664 kubelet[2426]: E0213 19:43:38.077569 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:39.040709 systemd[1]: Started sshd@109-10.0.0.52:22-10.0.0.1:38268.service - OpenSSH per-connection server daemon (10.0.0.1:38268).
Feb 13 19:43:39.076660 sshd[7095]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:39.077880 sshd[7095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:39.083098 systemd-logind[1430]: New session 110 of user core.
Feb 13 19:43:39.094185 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 19:43:39.212472 sshd[7095]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:39.215105 systemd[1]: sshd@109-10.0.0.52:22-10.0.0.1:38268.service: Deactivated successfully.
Feb 13 19:43:39.216777 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 19:43:39.217975 systemd-logind[1430]: Session 110 logged out. Waiting for processes to exit.
Feb 13 19:43:39.219049 systemd-logind[1430]: Removed session 110.
Feb 13 19:43:44.226512 systemd[1]: Started sshd@110-10.0.0.52:22-10.0.0.1:35336.service - OpenSSH per-connection server daemon (10.0.0.1:35336).
Feb 13 19:43:44.259814 sshd[7148]: Accepted publickey for core from 10.0.0.1 port 35336 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:44.260947 sshd[7148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:44.264742 systemd-logind[1430]: New session 111 of user core.
Feb 13 19:43:44.271187 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 19:43:44.376964 sshd[7148]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:44.380088 systemd[1]: sshd@110-10.0.0.52:22-10.0.0.1:35336.service: Deactivated successfully.
Feb 13 19:43:44.382561 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 19:43:44.383288 systemd-logind[1430]: Session 111 logged out. Waiting for processes to exit.
Feb 13 19:43:44.384233 systemd-logind[1430]: Removed session 111.
Feb 13 19:43:49.387467 systemd[1]: Started sshd@111-10.0.0.52:22-10.0.0.1:35346.service - OpenSSH per-connection server daemon (10.0.0.1:35346).
Feb 13 19:43:49.421393 sshd[7184]: Accepted publickey for core from 10.0.0.1 port 35346 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:49.422583 sshd[7184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:49.426528 systemd-logind[1430]: New session 112 of user core.
Feb 13 19:43:49.433122 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 19:43:49.537767 sshd[7184]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:49.541055 systemd[1]: sshd@111-10.0.0.52:22-10.0.0.1:35346.service: Deactivated successfully.
Feb 13 19:43:49.542508 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 19:43:49.543372 systemd-logind[1430]: Session 112 logged out. Waiting for processes to exit.
Feb 13 19:43:49.544204 systemd-logind[1430]: Removed session 112.
Feb 13 19:43:51.076971 kubelet[2426]: E0213 19:43:51.076933 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:54.550292 systemd[1]: Started sshd@112-10.0.0.52:22-10.0.0.1:50302.service - OpenSSH per-connection server daemon (10.0.0.1:50302).
Feb 13 19:43:54.585174 sshd[7222]: Accepted publickey for core from 10.0.0.1 port 50302 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:54.586517 sshd[7222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:54.590938 systemd-logind[1430]: New session 113 of user core.
Feb 13 19:43:54.606182 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 19:43:54.715931 sshd[7222]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:54.719639 systemd[1]: sshd@112-10.0.0.52:22-10.0.0.1:50302.service: Deactivated successfully.
Feb 13 19:43:54.722516 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 19:43:54.723480 systemd-logind[1430]: Session 113 logged out. Waiting for processes to exit.
Feb 13 19:43:54.725356 systemd-logind[1430]: Removed session 113.
Feb 13 19:43:59.727168 systemd[1]: Started sshd@113-10.0.0.52:22-10.0.0.1:50308.service - OpenSSH per-connection server daemon (10.0.0.1:50308).
Feb 13 19:43:59.763337 sshd[7257]: Accepted publickey for core from 10.0.0.1 port 50308 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:43:59.764666 sshd[7257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:59.769645 systemd-logind[1430]: New session 114 of user core.
Feb 13 19:43:59.779409 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 19:43:59.898355 sshd[7257]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:59.901795 systemd[1]: sshd@113-10.0.0.52:22-10.0.0.1:50308.service: Deactivated successfully.
Feb 13 19:43:59.903430 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 19:43:59.905694 systemd-logind[1430]: Session 114 logged out. Waiting for processes to exit.
Feb 13 19:43:59.906814 systemd-logind[1430]: Removed session 114.
Feb 13 19:44:04.912583 systemd[1]: Started sshd@114-10.0.0.52:22-10.0.0.1:33416.service - OpenSSH per-connection server daemon (10.0.0.1:33416).
Feb 13 19:44:04.947322 sshd[7292]: Accepted publickey for core from 10.0.0.1 port 33416 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:44:04.948429 sshd[7292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:04.952623 systemd-logind[1430]: New session 115 of user core.
Feb 13 19:44:04.967115 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 19:44:05.076783 sshd[7292]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:05.079513 systemd-logind[1430]: Session 115 logged out. Waiting for processes to exit.
Feb 13 19:44:05.080173 systemd[1]: sshd@114-10.0.0.52:22-10.0.0.1:33416.service: Deactivated successfully.
Feb 13 19:44:05.082219 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 19:44:05.085556 systemd-logind[1430]: Removed session 115.
Feb 13 19:44:08.107843 update_engine[1433]: I20250213 19:44:08.107780 1433 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 19:44:08.107843 update_engine[1433]: I20250213 19:44:08.107831 1433 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 19:44:08.108230 update_engine[1433]: I20250213 19:44:08.108093 1433 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 19:44:08.108518 update_engine[1433]: I20250213 19:44:08.108481 1433 omaha_request_params.cc:62] Current group set to lts
Feb 13 19:44:08.108603 update_engine[1433]: I20250213 19:44:08.108582 1433 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 19:44:08.108603 update_engine[1433]: I20250213 19:44:08.108593 1433 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 19:44:08.108657 update_engine[1433]: I20250213 19:44:08.108610 1433 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:44:08.108657 update_engine[1433]: I20250213 19:44:08.108637 1433 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 19:44:08.108712 update_engine[1433]: I20250213 19:44:08.108686 1433 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:44:08.108712 update_engine[1433]: I20250213 19:44:08.108695 1433 omaha_request_action.cc:272] Request:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]:
Feb 13 19:44:08.108712 update_engine[1433]: I20250213 19:44:08.108700 1433 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:44:08.108938 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 19:44:08.119437 update_engine[1433]: I20250213 19:44:08.119398 1433 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:44:08.121474 update_engine[1433]: I20250213 19:44:08.121435 1433 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:44:08.158947 update_engine[1433]: E20250213 19:44:08.158890 1433 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:44:08.159065 update_engine[1433]: I20250213 19:44:08.159008 1433 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 19:44:10.086419 systemd[1]: Started sshd@115-10.0.0.52:22-10.0.0.1:33418.service - OpenSSH per-connection server daemon (10.0.0.1:33418).
Feb 13 19:44:10.120228 sshd[7327]: Accepted publickey for core from 10.0.0.1 port 33418 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:44:10.121652 sshd[7327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:10.125792 systemd-logind[1430]: New session 116 of user core.
Feb 13 19:44:10.140214 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 19:44:10.243857 sshd[7327]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:10.247557 systemd[1]: sshd@115-10.0.0.52:22-10.0.0.1:33418.service: Deactivated successfully.
Feb 13 19:44:10.249291 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 19:44:10.250203 systemd-logind[1430]: Session 116 logged out. Waiting for processes to exit.
Feb 13 19:44:10.250983 systemd-logind[1430]: Removed session 116.
Feb 13 19:44:15.254387 systemd[1]: Started sshd@116-10.0.0.52:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608).
Feb 13 19:44:15.288774 sshd[7364]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:44:15.289933 sshd[7364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:15.293183 systemd-logind[1430]: New session 117 of user core.
Feb 13 19:44:15.304121 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 19:44:15.414351 sshd[7364]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:15.417351 systemd[1]: sshd@116-10.0.0.52:22-10.0.0.1:39608.service: Deactivated successfully.
Feb 13 19:44:15.419728 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 19:44:15.420396 systemd-logind[1430]: Session 117 logged out. Waiting for processes to exit.
Feb 13 19:44:15.421221 systemd-logind[1430]: Removed session 117.