Apr 13 23:56:15.611747 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 23:56:15.611774 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:56:15.611788 kernel: BIOS-provided physical RAM map: Apr 13 23:56:15.611796 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 13 23:56:15.611803 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 13 23:56:15.611809 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 13 23:56:15.611818 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 13 23:56:15.611825 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 13 23:56:15.611832 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 13 23:56:15.611839 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 13 23:56:15.611848 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 13 23:56:15.611856 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 13 23:56:15.611864 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 13 23:56:15.611871 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 13 23:56:15.611880 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 13 23:56:15.611888 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 13 23:56:15.611897 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
13 23:56:15.611905 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 13 23:56:15.611913 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 13 23:56:15.611920 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 13 23:56:15.611927 kernel: NX (Execute Disable) protection: active Apr 13 23:56:15.611934 kernel: APIC: Static calls initialized Apr 13 23:56:15.611942 kernel: efi: EFI v2.7 by EDK II Apr 13 23:56:15.611949 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 13 23:56:15.611956 kernel: SMBIOS 2.8 present. Apr 13 23:56:15.611963 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 13 23:56:15.611970 kernel: Hypervisor detected: KVM Apr 13 23:56:15.611980 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 23:56:15.611988 kernel: kvm-clock: using sched offset of 7557696795 cycles Apr 13 23:56:15.611997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 23:56:15.612006 kernel: tsc: Detected 2793.438 MHz processor Apr 13 23:56:15.612014 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 23:56:15.612022 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 23:56:15.612030 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 13 23:56:15.612038 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 13 23:56:15.612046 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 23:56:15.612055 kernel: Using GB pages for direct mapping Apr 13 23:56:15.612063 kernel: Secure boot disabled Apr 13 23:56:15.612070 kernel: ACPI: Early table checksum verification disabled Apr 13 23:56:15.612079 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 13 23:56:15.612090 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 13 23:56:15.612099 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612108 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612118 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 13 23:56:15.612127 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612135 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612142 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612150 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:56:15.612158 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 13 23:56:15.612166 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 13 23:56:15.612177 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 13 23:56:15.612186 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 13 23:56:15.612194 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 13 23:56:15.612202 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 13 23:56:15.612210 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 13 23:56:15.612218 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 13 23:56:15.612226 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 13 23:56:15.612234 kernel: No NUMA configuration found Apr 13 23:56:15.612242 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 13 23:56:15.612251 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 13 23:56:15.612259 kernel: Zone ranges: Apr 13 23:56:15.612266 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 23:56:15.612336 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 13 23:56:15.612346 kernel: Normal empty Apr 13 23:56:15.612354 kernel: Movable zone start for each node Apr 13 23:56:15.612362 kernel: Early memory node ranges Apr 13 23:56:15.612370 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 13 23:56:15.612378 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 13 23:56:15.612386 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 13 23:56:15.612397 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 13 23:56:15.612406 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 13 23:56:15.612415 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 13 23:56:15.612423 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 13 23:56:15.612432 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:56:15.612440 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 13 23:56:15.612449 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 13 23:56:15.612458 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:56:15.612467 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 13 23:56:15.612478 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 13 23:56:15.612487 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 13 23:56:15.612495 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 13 23:56:15.612504 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 23:56:15.612513 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 13 23:56:15.612522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 13 23:56:15.612531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 23:56:15.612540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 23:56:15.612548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 
23:56:15.612558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 23:56:15.612567 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 23:56:15.612576 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 13 23:56:15.612585 kernel: TSC deadline timer available Apr 13 23:56:15.612593 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 13 23:56:15.612602 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 13 23:56:15.612611 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 13 23:56:15.612619 kernel: kvm-guest: setup PV sched yield Apr 13 23:56:15.612628 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 23:56:15.612636 kernel: Booting paravirtualized kernel on KVM Apr 13 23:56:15.612647 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 23:56:15.612656 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 13 23:56:15.612665 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 13 23:56:15.612674 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 13 23:56:15.612682 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 13 23:56:15.612691 kernel: kvm-guest: PV spinlocks enabled Apr 13 23:56:15.612700 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 23:56:15.612709 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:56:15.612720 kernel: random: crng init done Apr 13 23:56:15.612729 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 23:56:15.612737 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 23:56:15.612745 kernel: Fallback order for Node 0: 0 Apr 13 23:56:15.612753 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 13 23:56:15.612761 kernel: Policy zone: DMA32 Apr 13 23:56:15.612769 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 23:56:15.612777 kernel: Memory: 2394672K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172124K reserved, 0K cma-reserved) Apr 13 23:56:15.612785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 13 23:56:15.612795 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 23:56:15.612802 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 23:56:15.612811 kernel: Dynamic Preempt: voluntary Apr 13 23:56:15.612819 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 23:56:15.612837 kernel: rcu: RCU event tracing is enabled. Apr 13 23:56:15.612849 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 13 23:56:15.612858 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 23:56:15.612868 kernel: Rude variant of Tasks RCU enabled. Apr 13 23:56:15.612878 kernel: Tracing variant of Tasks RCU enabled. Apr 13 23:56:15.612887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 23:56:15.612896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 13 23:56:15.612908 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 13 23:56:15.612918 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 23:56:15.612927 kernel: Console: colour dummy device 80x25 Apr 13 23:56:15.612937 kernel: printk: console [ttyS0] enabled Apr 13 23:56:15.612947 kernel: ACPI: Core revision 20230628 Apr 13 23:56:15.612959 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 13 23:56:15.612969 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 23:56:15.612979 kernel: x2apic enabled Apr 13 23:56:15.612988 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 23:56:15.612997 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 13 23:56:15.613007 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 13 23:56:15.613016 kernel: kvm-guest: setup PV IPIs Apr 13 23:56:15.613025 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 13 23:56:15.613035 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:56:15.613048 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 13 23:56:15.613057 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 13 23:56:15.613067 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 13 23:56:15.613076 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 13 23:56:15.613086 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 23:56:15.613095 kernel: Spectre V2 : Mitigation: Retpolines Apr 13 23:56:15.613104 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 23:56:15.613113 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 13 23:56:15.613123 kernel: RETBleed: Vulnerable Apr 13 23:56:15.613134 kernel: Speculative Store Bypass: Vulnerable Apr 13 23:56:15.613144 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 23:56:15.613154 kernel: GDS: Unknown: Dependent on hypervisor status Apr 13 23:56:15.613164 kernel: active return thunk: its_return_thunk Apr 13 23:56:15.613175 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 23:56:15.613185 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 23:56:15.613196 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 23:56:15.613207 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 23:56:15.613217 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 13 23:56:15.613230 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 13 23:56:15.613240 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 13 23:56:15.613250 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 23:56:15.613261 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 13 23:56:15.613424 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 13 23:56:15.613439 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 13 23:56:15.613449 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 13 23:56:15.613460 kernel: Freeing SMP alternatives memory: 32K Apr 13 23:56:15.613470 kernel: pid_max: default: 32768 minimum: 301 Apr 13 23:56:15.613484 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 23:56:15.613495 kernel: landlock: Up and running. Apr 13 23:56:15.613505 kernel: SELinux: Initializing. 
Apr 13 23:56:15.613514 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:56:15.613523 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:56:15.613533 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 13 23:56:15.613543 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:56:15.613553 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:56:15.613562 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:56:15.613573 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 13 23:56:15.613581 kernel: signal: max sigframe size: 3632 Apr 13 23:56:15.613589 kernel: rcu: Hierarchical SRCU implementation. Apr 13 23:56:15.613599 kernel: rcu: Max phase no-delay instances is 400. Apr 13 23:56:15.613610 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 23:56:15.613620 kernel: smp: Bringing up secondary CPUs ... Apr 13 23:56:15.613629 kernel: smpboot: x86: Booting SMP configuration: Apr 13 23:56:15.613638 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 13 23:56:15.613647 kernel: smp: Brought up 1 node, 4 CPUs Apr 13 23:56:15.613659 kernel: smpboot: Max logical packages: 1 Apr 13 23:56:15.613668 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 13 23:56:15.613678 kernel: devtmpfs: initialized Apr 13 23:56:15.613688 kernel: x86/mm: Memory block size: 128MB Apr 13 23:56:15.613698 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 13 23:56:15.613709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 13 23:56:15.613719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 13 23:56:15.613728 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 13 23:56:15.613738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 13 23:56:15.613749 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 23:56:15.613758 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 13 23:56:15.613767 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 23:56:15.613777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 23:56:15.613786 kernel: audit: initializing netlink subsys (disabled) Apr 13 23:56:15.613795 kernel: audit: type=2000 audit(1776124573.273:1): state=initialized audit_enabled=0 res=1 Apr 13 23:56:15.613805 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 23:56:15.613814 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 23:56:15.613823 kernel: cpuidle: using governor menu Apr 13 23:56:15.613835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 23:56:15.613845 kernel: dca service started, version 1.12.1 Apr 13 23:56:15.613854 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 13 
23:56:15.613863 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 13 23:56:15.613873 kernel: PCI: Using configuration type 1 for base access Apr 13 23:56:15.613882 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 23:56:15.613891 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 23:56:15.613900 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 23:56:15.613911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 23:56:15.613921 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 23:56:15.613929 kernel: ACPI: Added _OSI(Module Device) Apr 13 23:56:15.613939 kernel: ACPI: Added _OSI(Processor Device) Apr 13 23:56:15.613948 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 23:56:15.613957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 23:56:15.613966 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 23:56:15.613975 kernel: ACPI: Interpreter enabled Apr 13 23:56:15.613984 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 23:56:15.613993 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 23:56:15.614004 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 23:56:15.614014 kernel: PCI: Using E820 reservations for host bridge windows Apr 13 23:56:15.614023 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 13 23:56:15.614033 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 23:56:15.614182 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 23:56:15.614411 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 13 23:56:15.614515 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 13 23:56:15.614531 kernel: PCI host bridge to bus 0000:00 Apr 13 23:56:15.614640 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 13 23:56:15.614723 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 23:56:15.614797 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 23:56:15.614867 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 13 23:56:15.614936 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 13 23:56:15.615007 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 13 23:56:15.615082 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 23:56:15.615188 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 13 23:56:15.618461 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 13 23:56:15.618617 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 13 23:56:15.618706 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 13 23:56:15.618789 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 13 23:56:15.618875 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 13 23:56:15.618953 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 13 23:56:15.619080 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 23:56:15.619169 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 13 23:56:15.619254 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 13 23:56:15.619430 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 13 23:56:15.619552 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 13 23:56:15.619638 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 13 23:56:15.619716 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 13 23:56:15.619795 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 13 23:56:15.619890 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 13 23:56:15.619982 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 13 23:56:15.620099 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 13 23:56:15.620181 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 13 23:56:15.620268 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 13 23:56:15.620414 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 13 23:56:15.620492 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 13 23:56:15.620576 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 13 23:56:15.620653 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 13 23:56:15.620729 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 13 23:56:15.620808 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 13 23:56:15.620886 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 13 23:56:15.620898 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 23:56:15.620907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 23:56:15.620916 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 23:56:15.620925 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 23:56:15.620934 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 13 23:56:15.620943 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 13 23:56:15.620951 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 13 23:56:15.620962 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 13 23:56:15.620971 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 13 23:56:15.620981 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 13 23:56:15.620989 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 13 23:56:15.620998 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 13 23:56:15.621006 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 23:56:15.621014 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 13 23:56:15.621022 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 23:56:15.621031 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 23:56:15.621041 kernel: iommu: Default domain type: Translated Apr 13 23:56:15.621050 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 23:56:15.621059 kernel: efivars: Registered efivars operations Apr 13 23:56:15.621068 kernel: PCI: Using ACPI for IRQ routing Apr 13 23:56:15.621076 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 23:56:15.621085 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 13 23:56:15.621093 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 13 23:56:15.621101 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 13 23:56:15.621108 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 13 23:56:15.621182 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 23:56:15.621259 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 23:56:15.621444 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 23:56:15.621457 kernel: vgaarb: loaded Apr 13 23:56:15.621467 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 23:56:15.621478 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 23:56:15.621488 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 23:56:15.621497 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 23:56:15.621507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 23:56:15.621517 kernel: pnp: PnP ACPI init Apr 13 23:56:15.621618 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 13 23:56:15.621631 kernel: pnp: PnP ACPI: found 6 devices Apr 13 23:56:15.621641 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 23:56:15.621650 kernel: NET: Registered PF_INET protocol family Apr 13 23:56:15.621659 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 23:56:15.621669 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 23:56:15.621678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 23:56:15.621687 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 23:56:15.621699 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 23:56:15.621709 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 23:56:15.621719 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:56:15.621728 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:56:15.621738 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 23:56:15.621747 kernel: NET: Registered PF_XDP protocol family Apr 13 23:56:15.621829 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 13 23:56:15.621909 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 13 23:56:15.621987 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 23:56:15.622055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 23:56:15.622121 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 23:56:15.622187 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 13 23:56:15.622253 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 13 23:56:15.622423 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 13 23:56:15.622437 kernel: PCI: CLS 0 bytes, default 64 Apr 13 23:56:15.622447 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 13 23:56:15.622457 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:56:15.622469 kernel: Initialise system trusted keyrings Apr 13 23:56:15.622478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 23:56:15.622487 kernel: Key type asymmetric registered Apr 13 23:56:15.622495 kernel: Asymmetric key parser 'x509' registered Apr 13 23:56:15.622504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 23:56:15.622515 kernel: io scheduler mq-deadline registered Apr 13 23:56:15.622526 kernel: io scheduler kyber registered Apr 13 23:56:15.622535 kernel: io scheduler bfq registered Apr 13 23:56:15.622547 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 23:56:15.622557 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 23:56:15.622567 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 23:56:15.622576 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 13 23:56:15.622586 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 23:56:15.622595 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 23:56:15.622604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 23:56:15.622614 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 23:56:15.622624 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 23:56:15.622724 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 13 23:56:15.622740 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 23:56:15.622814 kernel: rtc_cmos 00:04: registered as rtc0 Apr 13 23:56:15.622885 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T23:56:14 UTC (1776124574) Apr 13 23:56:15.622961 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 13 23:56:15.622973 kernel: intel_pstate: CPU model not supported Apr 13 
23:56:15.622983 kernel: efifb: probing for efifb Apr 13 23:56:15.622992 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 13 23:56:15.623006 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 13 23:56:15.623016 kernel: efifb: scrolling: redraw Apr 13 23:56:15.623025 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 13 23:56:15.623035 kernel: Console: switching to colour frame buffer device 100x37 Apr 13 23:56:15.623046 kernel: fb0: EFI VGA frame buffer device Apr 13 23:56:15.623070 kernel: pstore: Using crash dump compression: deflate Apr 13 23:56:15.623082 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 23:56:15.623092 kernel: NET: Registered PF_INET6 protocol family Apr 13 23:56:15.623101 kernel: Segment Routing with IPv6 Apr 13 23:56:15.623113 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 23:56:15.623123 kernel: NET: Registered PF_PACKET protocol family Apr 13 23:56:15.623132 kernel: Key type dns_resolver registered Apr 13 23:56:15.623141 kernel: IPI shorthand broadcast: enabled Apr 13 23:56:15.623151 kernel: sched_clock: Marking stable (1579114419, 408220748)->(2374223077, -386887910) Apr 13 23:56:15.623160 kernel: registered taskstats version 1 Apr 13 23:56:15.623170 kernel: Loading compiled-in X.509 certificates Apr 13 23:56:15.623181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 23:56:15.623190 kernel: Key type .fscrypt registered Apr 13 23:56:15.623201 kernel: Key type fscrypt-provisioning registered Apr 13 23:56:15.623211 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 23:56:15.623221 kernel: ima: Allocated hash algorithm: sha1
Apr 13 23:56:15.623230 kernel: ima: No architecture policies found
Apr 13 23:56:15.623239 kernel: clk: Disabling unused clocks
Apr 13 23:56:15.623249 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 23:56:15.623258 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 23:56:15.623268 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 23:56:15.623346 kernel: Run /init as init process
Apr 13 23:56:15.623357 kernel: with arguments:
Apr 13 23:56:15.623369 kernel: /init
Apr 13 23:56:15.623378 kernel: with environment:
Apr 13 23:56:15.623387 kernel: HOME=/
Apr 13 23:56:15.623396 kernel: TERM=linux
Apr 13 23:56:15.623410 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:56:15.623423 systemd[1]: Detected virtualization kvm.
Apr 13 23:56:15.623435 systemd[1]: Detected architecture x86-64.
Apr 13 23:56:15.623445 systemd[1]: Running in initrd.
Apr 13 23:56:15.623455 systemd[1]: No hostname configured, using default hostname.
Apr 13 23:56:15.623464 systemd[1]: Hostname set to .
Apr 13 23:56:15.623476 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:56:15.623485 systemd[1]: Queued start job for default target initrd.target.
Apr 13 23:56:15.623497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:56:15.623506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:56:15.623517 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 23:56:15.623527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:56:15.623538 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 23:56:15.623548 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 23:56:15.623562 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 23:56:15.623574 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 23:56:15.623585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:56:15.623596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:56:15.623606 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:56:15.623617 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:56:15.623628 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:56:15.623639 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:56:15.623649 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:56:15.623663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:56:15.623673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 23:56:15.623683 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 23:56:15.623695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:56:15.623706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:56:15.623717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:56:15.623728 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:56:15.623739 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 23:56:15.623749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:56:15.623762 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 23:56:15.623772 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 23:56:15.623782 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:56:15.623793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:56:15.623803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:56:15.623814 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 23:56:15.623824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:56:15.623835 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 23:56:15.623848 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:56:15.623886 systemd-journald[194]: Collecting audit messages is disabled.
Apr 13 23:56:15.623914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:56:15.623925 systemd-journald[194]: Journal started
Apr 13 23:56:15.623950 systemd-journald[194]: Runtime Journal (/run/log/journal/44dc70a3dd834a5fb8f46c1139820868) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:56:15.629327 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:56:15.632351 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:56:15.636786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:56:15.640062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:56:15.644877 systemd-modules-load[195]: Inserted module 'overlay'
Apr 13 23:56:15.651961 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:56:15.666113 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:56:15.728196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:56:15.750502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:56:15.773845 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 23:56:15.819115 dracut-cmdline[226]: dracut-dracut-053
Apr 13 23:56:15.830420 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:56:15.860800 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 23:56:15.866519 kernel: Bridge firewalling registered
Apr 13 23:56:15.867170 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 13 23:56:15.873673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:56:15.937683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:56:15.980723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:56:16.063776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:56:16.117071 systemd-resolved[278]: Positive Trust Anchors:
Apr 13 23:56:16.117110 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:56:16.117144 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:56:16.120147 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 13 23:56:16.121496 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:56:16.127749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:56:16.254019 kernel: SCSI subsystem initialized
Apr 13 23:56:16.270980 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 23:56:16.306184 kernel: iscsi: registered transport (tcp)
Apr 13 23:56:16.352990 kernel: iscsi: registered transport (qla4xxx)
Apr 13 23:56:16.353234 kernel: QLogic iSCSI HBA Driver
Apr 13 23:56:16.546320 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:56:16.572914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 23:56:16.625087 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 23:56:16.625188 kernel: device-mapper: uevent: version 1.0.3
Apr 13 23:56:16.628191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 23:56:16.793666 kernel: raid6: avx512x4 gen() 28952 MB/s
Apr 13 23:56:16.837134 kernel: raid6: avx512x2 gen() 9309 MB/s
Apr 13 23:56:16.856069 kernel: raid6: avx512x1 gen() 24707 MB/s
Apr 13 23:56:16.875041 kernel: raid6: avx2x4 gen() 17441 MB/s
Apr 13 23:56:16.905832 kernel: raid6: avx2x2 gen() 16280 MB/s
Apr 13 23:56:16.924552 kernel: raid6: avx2x1 gen() 3057 MB/s
Apr 13 23:56:16.924637 kernel: raid6: using algorithm avx512x4 gen() 28952 MB/s
Apr 13 23:56:16.944748 kernel: raid6: .... xor() 8060 MB/s, rmw enabled
Apr 13 23:56:16.944826 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 23:56:16.987297 kernel: xor: automatically using best checksumming function avx
Apr 13 23:56:17.734000 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 23:56:17.788003 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:56:17.837591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:56:17.881195 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 13 23:56:17.904199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:56:17.935956 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 23:56:18.001706 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Apr 13 23:56:18.130859 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:56:18.155808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:56:18.262614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:56:18.333228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 23:56:18.355250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:56:18.367049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:56:18.374460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:56:18.391222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:56:18.432586 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 23:56:18.443830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:56:18.454978 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 23:56:18.455010 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 23:56:18.443999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:56:18.454827 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:56:18.475157 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 23:56:18.475210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:56:18.475550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:56:18.510221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 23:56:18.510255 kernel: GPT:9289727 != 19775487
Apr 13 23:56:18.510447 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 23:56:18.510463 kernel: GPT:9289727 != 19775487
Apr 13 23:56:18.510474 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 23:56:18.487902 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:56:18.517022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:56:18.516924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:56:18.536355 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:56:18.576347 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 13 23:56:18.583558 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (463)
Apr 13 23:56:18.586026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 23:56:18.645239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:56:18.650228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 23:56:18.661708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 23:56:18.677922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 23:56:18.725718 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 23:56:18.727649 kernel: libata version 3.00 loaded.
Apr 13 23:56:18.748943 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 23:56:18.751699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:56:18.776033 kernel: AES CTR mode by8 optimization enabled
Apr 13 23:56:18.751765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:56:18.754652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:56:18.789576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:56:18.789609 disk-uuid[505]: Primary Header is updated.
Apr 13 23:56:18.789609 disk-uuid[505]: Secondary Entries is updated.
Apr 13 23:56:18.789609 disk-uuid[505]: Secondary Header is updated.
Apr 13 23:56:18.825807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:56:18.774107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:56:18.842580 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 23:56:18.842861 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 23:56:18.857141 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 23:56:18.857537 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 23:56:18.882068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:56:18.926712 kernel: scsi host0: ahci
Apr 13 23:56:18.931324 kernel: scsi host1: ahci
Apr 13 23:56:18.934389 kernel: scsi host2: ahci
Apr 13 23:56:18.938722 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:56:18.951968 kernel: scsi host3: ahci
Apr 13 23:56:18.967659 kernel: scsi host4: ahci
Apr 13 23:56:18.976865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:56:19.018402 kernel: scsi host5: ahci
Apr 13 23:56:19.018598 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 13 23:56:19.018614 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 13 23:56:19.018640 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 13 23:56:19.018652 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 13 23:56:19.018664 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 13 23:56:19.018675 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 13 23:56:19.321698 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 23:56:19.321815 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 23:56:19.323811 kernel: ata3.00: applying bridge limits
Apr 13 23:56:19.326526 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 23:56:19.335738 kernel: ata3.00: configured for UDMA/100
Apr 13 23:56:19.341931 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 23:56:19.342008 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 23:56:19.354489 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 23:56:19.356516 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 23:56:19.356586 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 23:56:19.480211 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 23:56:19.484153 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 23:56:19.503671 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 23:56:19.840653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:56:19.841252 disk-uuid[513]: The operation has completed successfully.
Apr 13 23:56:20.071879 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 23:56:20.072000 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 23:56:20.095698 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 23:56:20.116174 sh[599]: Success
Apr 13 23:56:20.158611 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 23:56:20.366043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 23:56:20.370511 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 23:56:20.391960 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 23:56:20.419131 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 23:56:20.419226 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:56:20.419240 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 23:56:20.422185 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 23:56:20.422261 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 23:56:20.476794 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 23:56:20.485004 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 23:56:20.516762 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 23:56:20.531362 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 23:56:20.569198 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:56:20.569313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:56:20.569343 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:56:20.624040 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:56:20.669739 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 23:56:20.675225 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:56:20.731764 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 23:56:20.747480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 23:56:20.887780 ignition[721]: Ignition 2.19.0
Apr 13 23:56:20.887805 ignition[721]: Stage: fetch-offline
Apr 13 23:56:20.887840 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:20.887848 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:20.887937 ignition[721]: parsed url from cmdline: ""
Apr 13 23:56:20.887940 ignition[721]: no config URL provided
Apr 13 23:56:20.887945 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 23:56:20.887952 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Apr 13 23:56:20.887984 ignition[721]: op(1): [started] loading QEMU firmware config module
Apr 13 23:56:20.887989 ignition[721]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 23:56:20.926666 ignition[721]: op(1): [finished] loading QEMU firmware config module
Apr 13 23:56:20.945311 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:56:20.952728 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:56:20.995048 systemd-networkd[787]: lo: Link UP
Apr 13 23:56:20.996741 systemd-networkd[787]: lo: Gained carrier
Apr 13 23:56:20.998264 systemd-networkd[787]: Enumeration completed
Apr 13 23:56:20.999240 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:56:20.999244 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:56:20.999546 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:56:21.007886 systemd[1]: Reached target network.target - Network.
Apr 13 23:56:21.010529 systemd-networkd[787]: eth0: Link UP
Apr 13 23:56:21.010534 systemd-networkd[787]: eth0: Gained carrier
Apr 13 23:56:21.010549 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:56:21.069062 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:56:21.242928 ignition[721]: parsing config with SHA512: 15e0a9582760e80fd55386e66eaaa6362092dae282f56c70b212784d0460b000b11bb85ebc44ca7d420ea73b38101d670b7c58cdb31f0c73b1a7c09e30407392
Apr 13 23:56:21.248795 unknown[721]: fetched base config from "system"
Apr 13 23:56:21.248829 unknown[721]: fetched user config from "qemu"
Apr 13 23:56:21.249428 ignition[721]: fetch-offline: fetch-offline passed
Apr 13 23:56:21.254452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:56:21.249496 ignition[721]: Ignition finished successfully
Apr 13 23:56:21.261109 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 23:56:21.274205 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 23:56:21.309589 ignition[791]: Ignition 2.19.0
Apr 13 23:56:21.309611 ignition[791]: Stage: kargs
Apr 13 23:56:21.309806 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:21.309816 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:21.311071 ignition[791]: kargs: kargs passed
Apr 13 23:56:21.319234 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 23:56:21.311168 ignition[791]: Ignition finished successfully
Apr 13 23:56:21.346951 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 23:56:21.373742 ignition[799]: Ignition 2.19.0
Apr 13 23:56:21.373775 ignition[799]: Stage: disks
Apr 13 23:56:21.374022 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:21.378647 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 23:56:21.374030 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:21.385193 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 23:56:21.374827 ignition[799]: disks: disks passed
Apr 13 23:56:21.425068 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:56:21.374868 ignition[799]: Ignition finished successfully
Apr 13 23:56:21.439236 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:56:21.441769 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:56:21.445506 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:56:21.463984 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 23:56:21.486725 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 23:56:21.517595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 23:56:21.533564 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 23:56:21.751459 kernel: hrtimer: interrupt took 22171179 ns
Apr 13 23:56:22.046660 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 23:56:22.047065 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 23:56:22.049238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:56:22.101609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:56:22.116674 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 23:56:22.137187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 23:56:22.137540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 23:56:22.164586 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Apr 13 23:56:22.137580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:56:22.176983 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:56:22.177043 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:56:22.177054 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:56:22.187656 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 23:56:22.224936 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:56:22.228875 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 23:56:22.232787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:56:22.355710 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 23:56:22.372352 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 13 23:56:22.390578 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 23:56:22.402058 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 23:56:22.705734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 23:56:22.744505 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 23:56:22.749621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 23:56:22.752962 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 23:56:22.759192 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:56:22.847037 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 23:56:22.858765 ignition[932]: INFO : Ignition 2.19.0
Apr 13 23:56:22.858765 ignition[932]: INFO : Stage: mount
Apr 13 23:56:22.858765 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:22.858765 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:22.858765 ignition[932]: INFO : mount: mount passed
Apr 13 23:56:22.858765 ignition[932]: INFO : Ignition finished successfully
Apr 13 23:56:22.859838 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 23:56:22.884773 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 23:56:22.893883 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 13 23:56:23.065873 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:56:23.108717 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Apr 13 23:56:23.114719 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:56:23.114786 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:56:23.117202 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:56:23.138346 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:56:23.143168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:56:23.224952 ignition[962]: INFO : Ignition 2.19.0
Apr 13 23:56:23.224952 ignition[962]: INFO : Stage: files
Apr 13 23:56:23.230128 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:23.230128 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:23.230128 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 23:56:23.230128 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 23:56:23.230128 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 23:56:23.247763 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 23:56:23.247763 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 23:56:23.255904 unknown[962]: wrote ssh authorized keys file for user: core
Apr 13 23:56:23.260726 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 23:56:23.260726 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:56:23.270714 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 23:56:23.419687 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 23:56:23.625404 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:56:23.637007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 23:56:23.637007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 13 23:56:23.933324 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 23:56:24.163538 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 23:56:24.163538 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:56:24.173934 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 23:56:24.358100 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 23:56:26.505239 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:56:26.505239 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 13 23:56:26.513564 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:56:26.646830 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:56:26.655729 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:56:26.667203 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:56:26.672858 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 23:56:26.672858 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 23:56:26.680199 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:56:26.680199 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:56:26.680199 ignition[962]: INFO : files: files passed
Apr 13 23:56:26.680199 ignition[962]: INFO : Ignition finished successfully
Apr 13 23:56:26.685245 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 23:56:26.739031 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 23:56:26.745039 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 23:56:26.748100 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 23:56:26.748235 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 23:56:26.778368 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 13 23:56:26.786257 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:56:26.786257 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:56:26.826470 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:56:26.792259 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:56:26.827569 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 23:56:26.850812 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 23:56:26.927031 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 23:56:26.928232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 23:56:26.940177 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 23:56:26.944795 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 23:56:26.955989 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 23:56:26.981561 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 23:56:26.999790 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:56:27.009074 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 23:56:27.022163 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:56:27.048986 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:56:27.054487 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 23:56:27.057464 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 23:56:27.057730 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:56:27.069208 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 23:56:27.072267 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 23:56:27.081894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 23:56:27.089580 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:56:27.115893 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 23:56:27.126594 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 23:56:27.129816 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:56:27.135065 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 23:56:27.147074 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 23:56:27.154914 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 23:56:27.165123 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 23:56:27.165331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:56:27.170835 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:56:27.178168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:56:27.193973 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 23:56:27.219723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:56:27.220803 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 23:56:27.221007 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:56:27.249516 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 23:56:27.249733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:56:27.263417 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 23:56:27.271775 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 23:56:27.290244 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:56:27.293945 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 23:56:27.298726 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 23:56:27.305637 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 23:56:27.305748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:56:27.321473 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 23:56:27.321844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:56:27.345327 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 23:56:27.345527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:56:27.356005 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 23:56:27.359455 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 23:56:27.425927 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 23:56:27.437143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 23:56:27.439182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 23:56:27.443567 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:56:27.448668 ignition[1015]: INFO : Ignition 2.19.0
Apr 13 23:56:27.448668 ignition[1015]: INFO : Stage: umount
Apr 13 23:56:27.448668 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:56:27.448668 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:56:27.448668 ignition[1015]: INFO : umount: umount passed
Apr 13 23:56:27.448668 ignition[1015]: INFO : Ignition finished successfully
Apr 13 23:56:27.448855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 23:56:27.455820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:56:27.477755 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 23:56:27.480107 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 23:56:27.489085 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 23:56:27.544002 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 23:56:27.544488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 23:56:27.551443 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 23:56:27.553328 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 23:56:27.557045 systemd[1]: Stopped target network.target - Network.
Apr 13 23:56:27.567654 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 23:56:27.567767 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 23:56:27.572726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 23:56:27.572807 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 23:56:27.581897 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 23:56:27.581966 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 23:56:27.584496 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 23:56:27.584566 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 23:56:27.594875 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 23:56:27.594956 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 23:56:27.605565 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 23:56:27.616018 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 23:56:27.624436 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 13 23:56:27.631839 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 23:56:27.631950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 23:56:27.643600 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 23:56:27.643774 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 23:56:27.658735 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 23:56:27.658870 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:56:27.678853 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 23:56:27.681521 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 23:56:27.681623 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:56:27.687930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 23:56:27.688005 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:56:27.695519 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 23:56:27.735117 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:56:27.744553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 23:56:27.744646 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:56:27.748160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:56:27.782852 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 23:56:27.783944 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:56:27.789746 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 23:56:27.789825 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:56:27.799672 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 23:56:27.799724 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:56:27.805120 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 23:56:27.805200 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:56:27.818999 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 23:56:27.819228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:56:27.829450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:56:27.829528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:56:27.857563 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 23:56:27.864204 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 23:56:27.864543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:56:27.917612 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 23:56:27.917673 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:56:27.925960 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 23:56:27.926040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:56:27.935231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:56:27.935392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:56:27.943646 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 23:56:27.943793 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 23:56:27.949181 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 23:56:27.949340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 23:56:27.985633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 23:56:28.059738 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 23:56:28.077111 systemd[1]: Switching root.
Apr 13 23:56:28.125407 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 13 23:56:28.125500 systemd-journald[194]: Journal stopped
Apr 13 23:56:30.568580 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 23:56:30.568658 kernel: SELinux: policy capability open_perms=1
Apr 13 23:56:30.568672 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 23:56:30.568691 kernel: SELinux: policy capability always_check_network=0
Apr 13 23:56:30.568702 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 23:56:30.568713 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 23:56:30.568724 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 23:56:30.568735 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 23:56:30.568746 kernel: audit: type=1403 audit(1776124588.535:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 23:56:30.568759 systemd[1]: Successfully loaded SELinux policy in 109.945ms.
Apr 13 23:56:30.568781 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.455ms.
Apr 13 23:56:30.568797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:56:30.568811 systemd[1]: Detected virtualization kvm.
Apr 13 23:56:30.568823 systemd[1]: Detected architecture x86-64.
Apr 13 23:56:30.568834 systemd[1]: Detected first boot.
Apr 13 23:56:30.568846 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:56:30.568858 zram_generator::config[1061]: No configuration found.
Apr 13 23:56:30.568871 systemd[1]: Populated /etc with preset unit settings.
Apr 13 23:56:30.568886 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 23:56:30.568899 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 23:56:30.568915 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 23:56:30.568928 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 23:56:30.568941 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 23:56:30.568953 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 23:56:30.568966 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 23:56:30.568977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 23:56:30.568990 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 23:56:30.569003 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 23:56:30.569018 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 23:56:30.569031 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:56:30.569044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:56:30.569057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 23:56:30.569070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 23:56:30.569082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 23:56:30.569095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:56:30.569107 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 23:56:30.569119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:56:30.569132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 23:56:30.569144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 23:56:30.569158 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:56:30.569171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 23:56:30.569183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:56:30.569195 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:56:30.569206 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:56:30.569219 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:56:30.569232 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 23:56:30.569244 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 23:56:30.569256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:56:30.569312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:56:30.569333 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:56:30.569345 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 23:56:30.569372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 23:56:30.569386 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 23:56:30.569397 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 23:56:30.569416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:56:30.569427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 23:56:30.569439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 23:56:30.569451 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 23:56:30.569464 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 23:56:30.569478 systemd[1]: Reached target machines.target - Containers.
Apr 13 23:56:30.569492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 23:56:30.569506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:56:30.569521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:56:30.569535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 23:56:30.569547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:56:30.569559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:56:30.569571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:56:30.569583 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 23:56:30.569595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:56:30.569608 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 23:56:30.569619 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 23:56:30.569633 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 23:56:30.569644 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 23:56:30.569655 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 23:56:30.569666 kernel: fuse: init (API version 7.39)
Apr 13 23:56:30.569677 kernel: loop: module loaded
Apr 13 23:56:30.569688 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:56:30.569700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:56:30.569711 kernel: ACPI: bus type drm_connector registered
Apr 13 23:56:30.569724 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 23:56:30.569760 systemd-journald[1145]: Collecting audit messages is disabled.
Apr 13 23:56:30.569788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 23:56:30.569801 systemd-journald[1145]: Journal started
Apr 13 23:56:30.569826 systemd-journald[1145]: Runtime Journal (/run/log/journal/44dc70a3dd834a5fb8f46c1139820868) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:56:29.851619 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 23:56:29.895201 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 23:56:29.933114 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 23:56:30.577742 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:56:30.583342 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 23:56:30.583421 systemd[1]: Stopped verity-setup.service.
Apr 13 23:56:30.591504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:56:30.614905 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:56:30.619890 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 23:56:30.623410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 23:56:30.626466 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 23:56:30.629493 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 23:56:30.632151 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 23:56:30.635127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 23:56:30.637843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 23:56:30.640719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:56:30.643980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 23:56:30.644572 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 23:56:30.647774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:56:30.648513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:56:30.653090 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:56:30.653977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:56:30.657652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:56:30.658051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:56:30.661605 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 23:56:30.661766 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 23:56:30.664465 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:56:30.664628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:56:30.667925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:56:30.671137 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 23:56:30.674665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 23:56:30.715564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 23:56:30.726910 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 23:56:30.730102 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 23:56:30.732813 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 23:56:30.732975 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:56:30.737810 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 23:56:30.750607 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 23:56:30.754809 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 23:56:30.756992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:56:30.758297 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 23:56:30.763487 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 23:56:30.765952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:56:30.767414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 23:56:30.785910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:56:30.790656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:56:30.794721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 23:56:30.799690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:56:30.807954 systemd-journald[1145]: Time spent on flushing to /var/log/journal/44dc70a3dd834a5fb8f46c1139820868 is 59.074ms for 1002 entries.
Apr 13 23:56:30.807954 systemd-journald[1145]: System Journal (/var/log/journal/44dc70a3dd834a5fb8f46c1139820868) is 8.0M, max 195.6M, 187.6M free.
Apr 13 23:56:30.892865 systemd-journald[1145]: Received client request to flush runtime journal.
Apr 13 23:56:30.892906 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 23:56:30.808387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:56:30.814052 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 23:56:30.817081 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 23:56:30.820933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 23:56:30.862666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 23:56:30.877783 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 23:56:30.879426 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 13 23:56:30.879440 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 13 23:56:30.880854 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 23:56:30.893489 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 23:56:30.940631 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 23:56:30.926054 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 23:56:30.929741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:56:30.933040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:56:30.951262 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 23:56:30.954655 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 23:56:30.972327 kernel: loop1: detected capacity change from 0 to 140768
Apr 13 23:56:30.983651 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 23:56:30.990471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 23:56:31.062775 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 23:56:31.077758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:56:31.097651 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 13 23:56:31.098168 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 13 23:56:31.104615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:56:31.120319 kernel: loop2: detected capacity change from 0 to 228704 Apr 13 23:56:31.195500 kernel: loop3: detected capacity change from 0 to 142488 Apr 13 23:56:31.274422 kernel: loop4: detected capacity change from 0 to 140768 Apr 13 23:56:31.298344 kernel: loop5: detected capacity change from 0 to 228704 Apr 13 23:56:31.342778 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 13 23:56:31.343882 (sd-merge)[1204]: Merged extensions into '/usr'. Apr 13 23:56:31.351524 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 23:56:31.351548 systemd[1]: Reloading... Apr 13 23:56:31.462420 zram_generator::config[1226]: No configuration found. Apr 13 23:56:31.762350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:56:31.829499 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 23:56:31.834057 systemd[1]: Reloading finished in 481 ms. Apr 13 23:56:31.886982 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 23:56:31.894516 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 23:56:31.950774 systemd[1]: Starting ensure-sysext.service... Apr 13 23:56:31.954883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 23:56:31.958085 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 23:56:31.977940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 23:56:31.982134 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Apr 13 23:56:31.983474 systemd[1]: Reloading... 
Apr 13 23:56:32.015594 systemd-udevd[1270]: Using default interface naming scheme 'v255'. Apr 13 23:56:32.021017 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 23:56:32.021352 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 23:56:32.022229 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 23:56:32.022569 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 13 23:56:32.022619 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 13 23:56:32.029491 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:56:32.029907 systemd-tmpfiles[1268]: Skipping /boot Apr 13 23:56:32.064535 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:56:32.064679 systemd-tmpfiles[1268]: Skipping /boot Apr 13 23:56:32.094488 zram_generator::config[1292]: No configuration found. 
Apr 13 23:56:32.194495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1311) Apr 13 23:56:32.267328 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 13 23:56:32.278347 kernel: ACPI: button: Power Button [PWRF] Apr 13 23:56:32.331493 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 13 23:56:32.338473 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 13 23:56:32.349849 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 13 23:56:32.350103 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 13 23:56:32.350243 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 13 23:56:32.344986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:56:32.373457 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 23:56:32.517259 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 13 23:56:32.517411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 13 23:56:32.520414 systemd[1]: Reloading finished in 536 ms. Apr 13 23:56:32.654942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 23:56:32.686033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 23:56:32.777799 systemd[1]: Finished ensure-sysext.service. Apr 13 23:56:32.855717 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 23:56:32.876856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:56:32.891339 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 13 23:56:32.918757 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 23:56:32.922533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 23:56:32.923899 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 23:56:32.930783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 23:56:32.939325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 23:56:32.945009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 23:56:32.950571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 23:56:32.952144 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 23:56:32.957627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 23:56:32.975070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 23:56:32.991970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 23:56:33.062511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 23:56:33.068555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 23:56:33.071236 augenrules[1392]: No rules Apr 13 23:56:33.073423 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 23:56:33.080817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 23:56:33.090689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 23:56:33.094304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:56:33.095386 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:56:33.098655 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 23:56:33.115822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 23:56:33.116040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 23:56:33.121088 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 23:56:33.121267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 23:56:33.126944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 23:56:33.127113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 23:56:33.132324 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 23:56:33.132507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 23:56:33.136129 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 23:56:33.139777 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 23:56:33.149999 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 23:56:33.164976 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 23:56:33.165112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 23:56:33.165357 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 23:56:33.170782 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 13 23:56:33.173506 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 23:56:33.174741 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 23:56:33.179590 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 23:56:33.186843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 23:56:33.189768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 23:56:33.254520 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 23:56:33.262195 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 23:56:33.310595 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 23:56:33.315661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 23:56:33.384806 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 23:56:33.392973 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 23:56:33.435139 systemd-networkd[1390]: lo: Link UP Apr 13 23:56:33.435161 systemd-networkd[1390]: lo: Gained carrier Apr 13 23:56:33.436173 systemd-resolved[1393]: Positive Trust Anchors: Apr 13 23:56:33.436236 systemd-resolved[1393]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 23:56:33.436301 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 23:56:33.438078 systemd-networkd[1390]: Enumeration completed Apr 13 23:56:33.438234 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 23:56:33.439426 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 23:56:33.439575 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 23:56:33.440903 systemd-networkd[1390]: eth0: Link UP Apr 13 23:56:33.440913 systemd-networkd[1390]: eth0: Gained carrier Apr 13 23:56:33.440928 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 23:56:33.452825 systemd-resolved[1393]: Defaulting to hostname 'linux'. Apr 13 23:56:33.455025 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 23:56:33.468123 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 23:56:33.471411 systemd[1]: Reached target network.target - Network. 
Apr 13 23:56:33.472506 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 13 23:56:33.473990 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 13 23:56:34.325638 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 13 23:56:34.325697 systemd-timesyncd[1398]: Initial clock synchronization to Mon 2026-04-13 23:56:34.325529 UTC. Apr 13 23:56:34.326071 systemd-resolved[1393]: Clock change detected. Flushing caches. Apr 13 23:56:34.326138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 23:56:34.330501 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 23:56:34.333768 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 23:56:34.338361 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 23:56:34.341462 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 23:56:34.345501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 23:56:34.349680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 23:56:34.360979 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 23:56:34.362968 systemd[1]: Reached target paths.target - Path Units. Apr 13 23:56:34.367350 systemd[1]: Reached target timers.target - Timer Units. Apr 13 23:56:34.371526 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 23:56:34.377928 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 23:56:34.402989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 23:56:34.407783 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Apr 13 23:56:34.410774 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 23:56:34.413815 systemd[1]: Reached target basic.target - Basic System. Apr 13 23:56:34.416099 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 23:56:34.416135 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 23:56:34.450514 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 23:56:34.455135 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 23:56:34.459617 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 23:56:34.468826 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 23:56:34.472030 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 23:56:34.476402 jq[1435]: false Apr 13 23:56:34.477550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 23:56:34.491380 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 13 23:56:34.499203 extend-filesystems[1436]: Found loop3 Apr 13 23:56:34.499203 extend-filesystems[1436]: Found loop4 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found loop5 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found sr0 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda1 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda2 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda3 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found usr Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda4 Apr 13 23:56:34.524752 extend-filesystems[1436]: Found vda6 Apr 13 23:56:34.521642 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 23:56:34.531730 dbus-daemon[1434]: [system] SELinux support is enabled Apr 13 23:56:34.609905 extend-filesystems[1436]: Found vda7 Apr 13 23:56:34.609905 extend-filesystems[1436]: Found vda9 Apr 13 23:56:34.609905 extend-filesystems[1436]: Checking size of /dev/vda9 Apr 13 23:56:34.539768 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 23:56:34.614335 extend-filesystems[1436]: Resized partition /dev/vda9 Apr 13 23:56:34.607674 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 23:56:34.610344 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 23:56:34.610896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 13 23:56:34.624123 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Apr 13 23:56:34.639361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1302) Apr 13 23:56:34.639391 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 13 23:56:34.627646 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 23:56:34.639188 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 23:56:34.650739 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 23:56:34.667074 update_engine[1455]: I20260413 23:56:34.666421 1455 main.cc:92] Flatcar Update Engine starting Apr 13 23:56:34.669942 update_engine[1455]: I20260413 23:56:34.669687 1455 update_check_scheduler.cc:74] Next update check in 3m58s Apr 13 23:56:34.671514 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 23:56:34.671683 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 23:56:34.671922 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 23:56:34.672078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 23:56:34.679726 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 23:56:34.680769 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 23:56:34.692922 jq[1458]: true Apr 13 23:56:34.720600 jq[1461]: true Apr 13 23:56:34.719822 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 23:56:34.727216 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 13 23:56:34.785541 tar[1460]: linux-amd64/LICENSE Apr 13 23:56:34.814475 tar[1460]: linux-amd64/helm Apr 13 23:56:34.802386 systemd[1]: Started update-engine.service - Update Engine. 
Apr 13 23:56:34.812956 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Apr 13 23:56:34.812973 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 13 23:56:34.813475 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 23:56:34.813599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 23:56:34.824918 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 23:56:34.827587 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 23:56:34.835739 systemd-logind[1450]: New seat seat0. Apr 13 23:56:34.847708 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 13 23:56:34.847708 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 13 23:56:34.847708 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 13 23:56:34.868039 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Apr 13 23:56:34.853577 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 23:56:34.866053 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 23:56:34.875746 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 23:56:34.876002 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 13 23:56:34.914583 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Apr 13 23:56:34.918677 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 23:56:34.923232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 23:56:34.936452 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 13 23:56:34.974749 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 23:56:34.992926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 23:56:35.012014 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 23:56:35.038498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 23:56:35.057298 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:55558.service - OpenSSH per-connection server daemon (10.0.0.1:55558). Apr 13 23:56:35.060038 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 23:56:35.060311 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 23:56:35.077646 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 23:56:35.122063 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 23:56:35.141985 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 23:56:35.197770 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 23:56:35.205439 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 13 23:56:35.281470 containerd[1465]: time="2026-04-13T23:56:35.281360087Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 23:56:35.284996 sshd[1512]: Accepted publickey for core from 10.0.0.1 port 55558 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:35.286175 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:35.304033 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.312203625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314502091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314544808Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314563112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314723893Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314740144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314795066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314807730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.314983629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.315000373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315375 containerd[1465]: time="2026-04-13T23:56:35.315015950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315639 containerd[1465]: time="2026-04-13T23:56:35.315027298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315639 containerd[1465]: time="2026-04-13T23:56:35.315096851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315639 containerd[1465]: time="2026-04-13T23:56:35.315585159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:56:35.315939 containerd[1465]: time="2026-04-13T23:56:35.315720962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:56:35.316019 containerd[1465]: time="2026-04-13T23:56:35.315941965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 23:56:35.316121 containerd[1465]: time="2026-04-13T23:56:35.316086210Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 23:56:35.316307 containerd[1465]: time="2026-04-13T23:56:35.316267230Z" level=info msg="metadata content store policy set" policy=shared Apr 13 23:56:35.316994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 23:56:35.327754 systemd-logind[1450]: New session 1 of user core. Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330593668Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330666577Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330696598Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330714360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330732357Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 23:56:35.330955 containerd[1465]: time="2026-04-13T23:56:35.330899388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 13 23:56:35.331353 containerd[1465]: time="2026-04-13T23:56:35.331278153Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 23:56:35.331435 containerd[1465]: time="2026-04-13T23:56:35.331409615Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 23:56:35.331460 containerd[1465]: time="2026-04-13T23:56:35.331439949Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 23:56:35.331488 containerd[1465]: time="2026-04-13T23:56:35.331457433Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 23:56:35.331488 containerd[1465]: time="2026-04-13T23:56:35.331474722Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331530 containerd[1465]: time="2026-04-13T23:56:35.331489312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331530 containerd[1465]: time="2026-04-13T23:56:35.331505445Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331530 containerd[1465]: time="2026-04-13T23:56:35.331520978Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331591 containerd[1465]: time="2026-04-13T23:56:35.331536803Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331591 containerd[1465]: time="2026-04-13T23:56:35.331552009Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 13 23:56:35.331591 containerd[1465]: time="2026-04-13T23:56:35.331565146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331591 containerd[1465]: time="2026-04-13T23:56:35.331581744Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331609806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331625196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331639458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331653612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331667001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331683171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331697711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331718 containerd[1465]: time="2026-04-13T23:56:35.331711902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331732279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331748187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331759901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331771811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331784704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331803080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331824566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331838889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.331885 containerd[1465]: time="2026-04-13T23:56:35.331850549Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 23:56:35.332116 containerd[1465]: time="2026-04-13T23:56:35.332061628Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 23:56:35.332215 containerd[1465]: time="2026-04-13T23:56:35.332122571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 23:56:35.332215 containerd[1465]: time="2026-04-13T23:56:35.332143742Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 23:56:35.332295 containerd[1465]: time="2026-04-13T23:56:35.332232302Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 23:56:35.332295 containerd[1465]: time="2026-04-13T23:56:35.332244608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 23:56:35.332295 containerd[1465]: time="2026-04-13T23:56:35.332281709Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 23:56:35.332350 containerd[1465]: time="2026-04-13T23:56:35.332300934Z" level=info msg="NRI interface is disabled by configuration." Apr 13 23:56:35.332350 containerd[1465]: time="2026-04-13T23:56:35.332312802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 23:56:35.332880 containerd[1465]: time="2026-04-13T23:56:35.332644618Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 23:56:35.333437 containerd[1465]: time="2026-04-13T23:56:35.332881661Z" level=info msg="Connect containerd service" Apr 13 23:56:35.333437 containerd[1465]: time="2026-04-13T23:56:35.332949992Z" level=info msg="using legacy CRI server" Apr 13 23:56:35.333437 containerd[1465]: time="2026-04-13T23:56:35.332957692Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 23:56:35.334925 containerd[1465]: time="2026-04-13T23:56:35.333143198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338057325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338315378Z" level=info msg="Start subscribing containerd event" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338387147Z" level=info msg="Start recovering state" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338462375Z" level=info msg="Start event monitor" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338475422Z" level=info msg="Start 
snapshots syncer" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338486096Z" level=info msg="Start cni network conf syncer for default" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338493373Z" level=info msg="Start streaming server" Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.338775145Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 23:56:35.340734 containerd[1465]: time="2026-04-13T23:56:35.339920207Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 23:56:35.340280 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 23:56:35.345502 containerd[1465]: time="2026-04-13T23:56:35.345441460Z" level=info msg="containerd successfully booted in 0.066553s" Apr 13 23:56:35.347634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 23:56:35.360823 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 23:56:35.398781 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 23:56:35.669232 systemd[1525]: Queued start job for default target default.target. Apr 13 23:56:35.697960 systemd[1525]: Created slice app.slice - User Application Slice. Apr 13 23:56:35.698144 systemd[1525]: Reached target paths.target - Paths. Apr 13 23:56:35.698197 systemd[1525]: Reached target timers.target - Timers. Apr 13 23:56:35.700093 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 23:56:35.720627 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 23:56:35.720764 systemd[1525]: Reached target sockets.target - Sockets. Apr 13 23:56:35.720779 systemd[1525]: Reached target basic.target - Basic System. Apr 13 23:56:35.720832 systemd[1525]: Reached target default.target - Main User Target. Apr 13 23:56:35.720858 systemd[1525]: Startup finished in 312ms. 
Apr 13 23:56:35.720956 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 23:56:35.741890 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 23:56:35.878820 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:52402.service - OpenSSH per-connection server daemon (10.0.0.1:52402). Apr 13 23:56:35.930198 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 52402 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:35.932293 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:35.934323 tar[1460]: linux-amd64/README.md Apr 13 23:56:35.974269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 23:56:35.987200 systemd-logind[1450]: New session 2 of user core. Apr 13 23:56:36.001106 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 23:56:36.131598 sshd[1536]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:36.188491 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:52402.service: Deactivated successfully. Apr 13 23:56:36.195120 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 23:56:36.201683 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Apr 13 23:56:36.212107 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:52410.service - OpenSSH per-connection server daemon (10.0.0.1:52410). Apr 13 23:56:36.216310 systemd-logind[1450]: Removed session 2. Apr 13 23:56:36.221422 systemd-networkd[1390]: eth0: Gained IPv6LL Apr 13 23:56:36.229470 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 23:56:36.236013 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 23:56:36.288784 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 13 23:56:36.299981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:56:36.310559 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 23:56:36.401924 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 13 23:56:36.402434 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 13 23:56:36.407107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 23:56:36.420142 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 23:56:36.437798 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 52410 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:36.440121 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:36.458110 systemd-logind[1450]: New session 3 of user core. Apr 13 23:56:36.467463 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 23:56:36.605413 sshd[1546]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:36.610496 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:52410.service: Deactivated successfully. Apr 13 23:56:36.612934 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 23:56:36.620178 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Apr 13 23:56:36.624478 systemd-logind[1450]: Removed session 3. Apr 13 23:56:38.646956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:56:38.650582 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 23:56:38.652743 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:56:38.656295 systemd[1]: Startup finished in 1.874s (kernel) + 13.517s (initrd) + 9.380s (userspace) = 24.773s. 
Apr 13 23:56:40.343503 kubelet[1574]: E0413 23:56:40.342067 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:56:40.372444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:56:40.372603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:56:40.374753 systemd[1]: kubelet.service: Consumed 1.546s CPU time. Apr 13 23:56:46.635877 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:51954.service - OpenSSH per-connection server daemon (10.0.0.1:51954). Apr 13 23:56:46.680317 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 51954 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:46.682707 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:46.712553 systemd-logind[1450]: New session 4 of user core. Apr 13 23:56:46.724460 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 23:56:46.811100 sshd[1588]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:46.828486 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:51954.service: Deactivated successfully. Apr 13 23:56:46.831030 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 23:56:46.835122 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Apr 13 23:56:46.875813 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:51966.service - OpenSSH per-connection server daemon (10.0.0.1:51966). Apr 13 23:56:46.878099 systemd-logind[1450]: Removed session 4. 
Apr 13 23:56:46.966658 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 51966 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:46.968053 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:46.995291 systemd-logind[1450]: New session 5 of user core. Apr 13 23:56:47.021398 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 23:56:47.123200 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:47.147829 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:51966.service: Deactivated successfully. Apr 13 23:56:47.150516 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 23:56:47.166625 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Apr 13 23:56:47.204068 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970). Apr 13 23:56:47.212498 systemd-logind[1450]: Removed session 5. Apr 13 23:56:47.295099 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:47.300053 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:47.334892 systemd-logind[1450]: New session 6 of user core. Apr 13 23:56:47.375907 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 23:56:47.503678 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:47.536587 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Apr 13 23:56:47.536748 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:51970.service: Deactivated successfully. Apr 13 23:56:47.538790 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 23:56:47.587935 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). Apr 13 23:56:47.591480 systemd-logind[1450]: Removed session 6. 
Apr 13 23:56:47.733782 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:47.735239 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:47.786495 systemd-logind[1450]: New session 7 of user core. Apr 13 23:56:47.807885 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 23:56:47.937104 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 23:56:47.937830 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:56:47.973823 sudo[1613]: pam_unix(sudo:session): session closed for user root Apr 13 23:56:47.983039 sshd[1610]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:48.008880 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:51984.service: Deactivated successfully. Apr 13 23:56:48.011627 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 23:56:48.016989 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Apr 13 23:56:48.031772 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994). Apr 13 23:56:48.086916 systemd-logind[1450]: Removed session 7. Apr 13 23:56:48.173886 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:48.180673 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:48.201462 systemd-logind[1450]: New session 8 of user core. Apr 13 23:56:48.218524 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 13 23:56:48.359101 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 23:56:48.359762 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:56:48.374976 sudo[1622]: pam_unix(sudo:session): session closed for user root Apr 13 23:56:48.386787 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 23:56:48.387560 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:56:48.468006 auditctl[1625]: No rules Apr 13 23:56:48.468289 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 23:56:48.470473 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 23:56:48.473678 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 23:56:48.481747 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 23:56:48.614130 augenrules[1643]: No rules Apr 13 23:56:48.618246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:56:48.619886 sudo[1621]: pam_unix(sudo:session): session closed for user root Apr 13 23:56:48.630607 sshd[1618]: pam_unix(sshd:session): session closed for user core Apr 13 23:56:48.678752 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:51996.service - OpenSSH per-connection server daemon (10.0.0.1:51996). Apr 13 23:56:48.679561 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:51994.service: Deactivated successfully. Apr 13 23:56:48.683464 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 23:56:48.685569 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Apr 13 23:56:48.688948 systemd-logind[1450]: Removed session 8. 
Apr 13 23:56:48.740440 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 51996 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:56:48.740438 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:56:48.777909 systemd-logind[1450]: New session 9 of user core. Apr 13 23:56:48.803261 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 23:56:48.934957 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 23:56:48.937368 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:56:50.098908 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 23:56:50.099897 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 23:56:50.597527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 23:56:50.625467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:56:50.981139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:56:50.987528 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:56:51.209440 kubelet[1686]: E0413 23:56:51.208551 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:56:51.219185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:56:51.219361 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:56:51.225395 dockerd[1673]: time="2026-04-13T23:56:51.223350636Z" level=info msg="Starting up" Apr 13 23:56:51.670357 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3918392260-merged.mount: Deactivated successfully. Apr 13 23:56:51.791072 dockerd[1673]: time="2026-04-13T23:56:51.791014730Z" level=info msg="Loading containers: start." Apr 13 23:56:52.453905 kernel: Initializing XFRM netlink socket Apr 13 23:56:52.982902 systemd-networkd[1390]: docker0: Link UP Apr 13 23:56:53.106193 dockerd[1673]: time="2026-04-13T23:56:53.106059605Z" level=info msg="Loading containers: done." Apr 13 23:56:53.228079 dockerd[1673]: time="2026-04-13T23:56:53.227895832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 23:56:53.228661 dockerd[1673]: time="2026-04-13T23:56:53.228564323Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 23:56:53.228741 dockerd[1673]: time="2026-04-13T23:56:53.228715081Z" level=info msg="Daemon has completed initialization" Apr 13 23:56:53.442613 dockerd[1673]: time="2026-04-13T23:56:53.442484349Z" level=info msg="API listen on /run/docker.sock" Apr 13 23:56:53.452882 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 23:56:55.300914 containerd[1465]: time="2026-04-13T23:56:55.300754685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 23:56:56.386729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195887878.mount: Deactivated successfully. 
Apr 13 23:57:00.788141 containerd[1465]: time="2026-04-13T23:57:00.787617551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:00.790437 containerd[1465]: time="2026-04-13T23:57:00.790308450Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 13 23:57:00.795878 containerd[1465]: time="2026-04-13T23:57:00.795009941Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:00.831698 containerd[1465]: time="2026-04-13T23:57:00.831483525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:00.834679 containerd[1465]: time="2026-04-13T23:57:00.833794000Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 5.532935877s" Apr 13 23:57:00.834679 containerd[1465]: time="2026-04-13T23:57:00.834073786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 23:57:00.835795 containerd[1465]: time="2026-04-13T23:57:00.835411776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 23:57:01.380635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 13 23:57:01.390707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:01.804958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:01.805105 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:57:01.994632 kubelet[1901]: E0413 23:57:01.993620 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:57:02.007614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:57:02.009857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:57:04.692772 containerd[1465]: time="2026-04-13T23:57:04.692130506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:04.695908 containerd[1465]: time="2026-04-13T23:57:04.695830267Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 13 23:57:04.698708 containerd[1465]: time="2026-04-13T23:57:04.698327469Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:04.704415 containerd[1465]: time="2026-04-13T23:57:04.704200240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:04.708063 containerd[1465]: 
time="2026-04-13T23:57:04.706139780Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 3.870696559s" Apr 13 23:57:04.708063 containerd[1465]: time="2026-04-13T23:57:04.706202812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 23:57:04.710011 containerd[1465]: time="2026-04-13T23:57:04.709835648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 23:57:06.511833 containerd[1465]: time="2026-04-13T23:57:06.511613202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:06.514329 containerd[1465]: time="2026-04-13T23:57:06.513682935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 13 23:57:06.515477 containerd[1465]: time="2026-04-13T23:57:06.515428454Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:06.523181 containerd[1465]: time="2026-04-13T23:57:06.522694416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:06.526262 containerd[1465]: time="2026-04-13T23:57:06.526122505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id 
\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.816246863s" Apr 13 23:57:06.526262 containerd[1465]: time="2026-04-13T23:57:06.526227341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 23:57:06.527723 containerd[1465]: time="2026-04-13T23:57:06.527622135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 23:57:08.604121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301861683.mount: Deactivated successfully. Apr 13 23:57:10.529590 containerd[1465]: time="2026-04-13T23:57:10.529287623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:10.530745 containerd[1465]: time="2026-04-13T23:57:10.530601144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 13 23:57:10.598296 containerd[1465]: time="2026-04-13T23:57:10.597939805Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:10.603817 containerd[1465]: time="2026-04-13T23:57:10.603658206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:10.605081 containerd[1465]: time="2026-04-13T23:57:10.604926291Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo 
tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 4.077260478s" Apr 13 23:57:10.605081 containerd[1465]: time="2026-04-13T23:57:10.605081803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 23:57:10.606235 containerd[1465]: time="2026-04-13T23:57:10.606018211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 23:57:11.337724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681025039.mount: Deactivated successfully. Apr 13 23:57:12.097653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 23:57:12.119874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:12.411903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:12.429828 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:57:12.529618 kubelet[1987]: E0413 23:57:12.529436 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:57:12.532627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:57:12.532789 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:57:13.209266 containerd[1465]: time="2026-04-13T23:57:13.207904172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.210641 containerd[1465]: time="2026-04-13T23:57:13.210495070Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 13 23:57:13.212140 containerd[1465]: time="2026-04-13T23:57:13.212058205Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.224438 containerd[1465]: time="2026-04-13T23:57:13.224102070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.227704 containerd[1465]: time="2026-04-13T23:57:13.227109820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.6210592s" Apr 13 23:57:13.227704 containerd[1465]: time="2026-04-13T23:57:13.227670487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 23:57:13.231330 containerd[1465]: time="2026-04-13T23:57:13.231253019Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 23:57:13.823821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723301330.mount: Deactivated successfully. 
Apr 13 23:57:13.834619 containerd[1465]: time="2026-04-13T23:57:13.834527302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.835863 containerd[1465]: time="2026-04-13T23:57:13.835776250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 13 23:57:13.837474 containerd[1465]: time="2026-04-13T23:57:13.837247514Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.842565 containerd[1465]: time="2026-04-13T23:57:13.842370623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:13.891038 containerd[1465]: time="2026-04-13T23:57:13.890763106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 659.430674ms" Apr 13 23:57:13.891038 containerd[1465]: time="2026-04-13T23:57:13.891008865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 23:57:13.892691 containerd[1465]: time="2026-04-13T23:57:13.892639604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 23:57:14.538856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494987404.mount: Deactivated successfully. 
Apr 13 23:57:16.870498 containerd[1465]: time="2026-04-13T23:57:16.870336975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:16.871808 containerd[1465]: time="2026-04-13T23:57:16.871672746Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 13 23:57:16.873266 containerd[1465]: time="2026-04-13T23:57:16.872921392Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:16.877034 containerd[1465]: time="2026-04-13T23:57:16.876942329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:57:16.878757 containerd[1465]: time="2026-04-13T23:57:16.878605447Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.985914272s" Apr 13 23:57:16.878866 containerd[1465]: time="2026-04-13T23:57:16.878788002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 23:57:19.915534 update_engine[1455]: I20260413 23:57:19.914327 1455 update_attempter.cc:509] Updating boot flags... Apr 13 23:57:19.998618 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2097) Apr 13 23:57:22.599289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Apr 13 23:57:22.614772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:22.971486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:22.976499 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:57:23.035031 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:23.037588 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:57:23.037972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:23.091028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:23.133426 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-9.scope)... Apr 13 23:57:23.133446 systemd[1]: Reloading... Apr 13 23:57:23.305197 zram_generator::config[2168]: No configuration found. Apr 13 23:57:23.545687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:57:23.697747 systemd[1]: Reloading finished in 563 ms. Apr 13 23:57:23.820731 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 23:57:23.820821 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 23:57:23.821365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:23.826362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:24.092818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:57:24.112848 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:57:24.289315 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:57:24.289751 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:57:24.289751 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:57:24.289751 kubelet[2213]: I0413 23:57:24.289498 2213 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:57:25.692715 kubelet[2213]: I0413 23:57:25.692632 2213 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:57:25.692715 kubelet[2213]: I0413 23:57:25.692690 2213 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:57:25.693432 kubelet[2213]: I0413 23:57:25.692995 2213 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:57:25.742856 kubelet[2213]: E0413 23:57:25.742775 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:57:25.767423 kubelet[2213]: I0413 23:57:25.766544 2213 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:57:25.776684 kubelet[2213]: E0413 23:57:25.776616 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:57:25.776684 kubelet[2213]: I0413 23:57:25.776686 2213 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:57:25.783483 kubelet[2213]: I0413 23:57:25.783132 2213 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 23:57:25.783859 kubelet[2213]: I0413 23:57:25.783704 2213 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:57:25.784425 kubelet[2213]: I0413 23:57:25.783882 2213 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 23:57:25.784425 kubelet[2213]: I0413 23:57:25.784406 2213 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:57:25.784425 kubelet[2213]: I0413 23:57:25.784416 2213 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 23:57:25.784684 kubelet[2213]: I0413 23:57:25.784556 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:57:25.792582 kubelet[2213]: I0413 23:57:25.792482 2213 kubelet.go:480] "Attempting to sync node with API 
server" Apr 13 23:57:25.792582 kubelet[2213]: I0413 23:57:25.792579 2213 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:57:25.792929 kubelet[2213]: I0413 23:57:25.792632 2213 kubelet.go:386] "Adding apiserver pod source" Apr 13 23:57:25.792929 kubelet[2213]: I0413 23:57:25.792665 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:57:25.800318 kubelet[2213]: E0413 23:57:25.800277 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:57:25.800318 kubelet[2213]: E0413 23:57:25.800276 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:57:25.801874 kubelet[2213]: I0413 23:57:25.801631 2213 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:57:25.804046 kubelet[2213]: I0413 23:57:25.803773 2213 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:57:25.810427 kubelet[2213]: W0413 23:57:25.808391 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 23:57:25.817038 kubelet[2213]: I0413 23:57:25.816992 2213 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 23:57:25.817143 kubelet[2213]: I0413 23:57:25.817074 2213 server.go:1289] "Started kubelet" Apr 13 23:57:25.817230 kubelet[2213]: I0413 23:57:25.817141 2213 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:57:25.817984 kubelet[2213]: I0413 23:57:25.817824 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:57:25.818581 kubelet[2213]: I0413 23:57:25.818379 2213 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:57:25.820292 kubelet[2213]: I0413 23:57:25.818845 2213 server.go:317] "Adding debug handlers to kubelet server" Apr 13 23:57:25.822380 kubelet[2213]: I0413 23:57:25.822075 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:57:25.822816 kubelet[2213]: I0413 23:57:25.822556 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:57:25.824220 kubelet[2213]: I0413 23:57:25.823945 2213 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 23:57:25.825778 kubelet[2213]: E0413 23:57:25.824071 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60ff47ebd7e20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:57:25.817024032 +0000 UTC m=+1.686243531,LastTimestamp:2026-04-13 23:57:25.817024032 +0000 UTC 
m=+1.686243531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:57:25.828183 kubelet[2213]: I0413 23:57:25.826907 2213 reconciler.go:26] "Reconciler: start to sync state" Apr 13 23:57:25.828183 kubelet[2213]: I0413 23:57:25.827064 2213 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 23:57:25.828183 kubelet[2213]: E0413 23:57:25.827637 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:57:25.828183 kubelet[2213]: E0413 23:57:25.827705 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms" Apr 13 23:57:25.828183 kubelet[2213]: E0413 23:57:25.827762 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:57:25.831194 kubelet[2213]: E0413 23:57:25.831143 2213 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:57:25.831349 kubelet[2213]: I0413 23:57:25.831260 2213 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:57:25.831705 kubelet[2213]: I0413 23:57:25.831581 2213 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:57:25.834105 kubelet[2213]: I0413 23:57:25.834063 2213 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:57:25.914995 kubelet[2213]: I0413 23:57:25.911886 2213 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:57:25.914995 kubelet[2213]: I0413 23:57:25.911903 2213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:57:25.914995 kubelet[2213]: I0413 23:57:25.911933 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:57:25.926540 kubelet[2213]: I0413 23:57:25.926091 2213 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 23:57:25.928641 kubelet[2213]: E0413 23:57:25.928586 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:57:25.931194 kubelet[2213]: I0413 23:57:25.931131 2213 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 23:57:25.931384 kubelet[2213]: I0413 23:57:25.931364 2213 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 23:57:25.931474 kubelet[2213]: I0413 23:57:25.931414 2213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 23:57:25.931474 kubelet[2213]: I0413 23:57:25.931423 2213 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 23:57:25.931527 kubelet[2213]: E0413 23:57:25.931494 2213 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:57:25.932000 kubelet[2213]: E0413 23:57:25.931966 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:57:25.964822 kubelet[2213]: I0413 23:57:25.964528 2213 policy_none.go:49] "None policy: Start" Apr 13 23:57:25.964822 kubelet[2213]: I0413 23:57:25.964615 2213 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 23:57:25.964822 kubelet[2213]: I0413 23:57:25.964635 2213 state_mem.go:35] "Initializing new in-memory state store" Apr 13 23:57:26.020724 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 23:57:26.029093 kubelet[2213]: E0413 23:57:26.028854 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:57:26.031813 kubelet[2213]: E0413 23:57:26.031702 2213 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:57:26.038753 kubelet[2213]: E0413 23:57:26.038107 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms" Apr 13 23:57:26.128918 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 23:57:26.130412 kubelet[2213]: E0413 23:57:26.130269 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:57:26.155487 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 23:57:26.193672 kubelet[2213]: E0413 23:57:26.193390 2213 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:57:26.194186 kubelet[2213]: I0413 23:57:26.193889 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:57:26.194186 kubelet[2213]: I0413 23:57:26.194022 2213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:57:26.194607 kubelet[2213]: I0413 23:57:26.194551 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:57:26.195641 kubelet[2213]: E0413 23:57:26.195564 2213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 23:57:26.195726 kubelet[2213]: E0413 23:57:26.195651 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:57:26.295804 kubelet[2213]: I0413 23:57:26.295588 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:26.296090 kubelet[2213]: E0413 23:57:26.296040 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Apr 13 23:57:26.300759 systemd[1]: Created slice kubepods-burstable-podec4e5c5105558a5e2d8b4f9772d220dc.slice - libcontainer container kubepods-burstable-podec4e5c5105558a5e2d8b4f9772d220dc.slice. 
Apr 13 23:57:26.325254 kubelet[2213]: E0413 23:57:26.325185 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:26.331039 kubelet[2213]: I0413 23:57:26.330994 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:26.331039 kubelet[2213]: I0413 23:57:26.331056 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:26.331445 kubelet[2213]: I0413 23:57:26.331083 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:26.331445 kubelet[2213]: I0413 23:57:26.331107 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:26.331532 kubelet[2213]: I0413 23:57:26.331503 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:26.331640 kubelet[2213]: I0413 23:57:26.331579 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:26.331640 kubelet[2213]: I0413 23:57:26.331603 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:26.331710 kubelet[2213]: I0413 23:57:26.331655 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:26.331710 kubelet[2213]: I0413 23:57:26.331676 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:26.333619 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container 
kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice. Apr 13 23:57:26.336403 kubelet[2213]: E0413 23:57:26.336374 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:26.348796 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice. Apr 13 23:57:26.350978 kubelet[2213]: E0413 23:57:26.350919 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:26.443057 kubelet[2213]: E0413 23:57:26.442904 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms" Apr 13 23:57:26.502844 kubelet[2213]: I0413 23:57:26.501571 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:26.502844 kubelet[2213]: E0413 23:57:26.502461 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Apr 13 23:57:26.626943 kubelet[2213]: E0413 23:57:26.626849 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:26.629058 containerd[1465]: time="2026-04-13T23:57:26.628640233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec4e5c5105558a5e2d8b4f9772d220dc,Namespace:kube-system,Attempt:0,}" Apr 13 23:57:26.638241 kubelet[2213]: E0413 23:57:26.638116 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:26.640017 containerd[1465]: time="2026-04-13T23:57:26.639716426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 13 23:57:26.668356 kubelet[2213]: E0413 23:57:26.667339 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:26.669602 containerd[1465]: time="2026-04-13T23:57:26.668970453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 13 23:57:26.766089 kubelet[2213]: E0413 23:57:26.765661 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:57:26.910238 kubelet[2213]: I0413 23:57:26.909904 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:26.912062 kubelet[2213]: E0413 23:57:26.911807 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Apr 13 23:57:26.992352 kubelet[2213]: E0413 23:57:26.991852 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 13 23:57:27.154146 kubelet[2213]: E0413 23:57:27.153643 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:57:27.247390 kubelet[2213]: E0413 23:57:27.244798 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="1.6s" Apr 13 23:57:27.246615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20907153.mount: Deactivated successfully. Apr 13 23:57:27.255076 containerd[1465]: time="2026-04-13T23:57:27.254906910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:57:27.256993 containerd[1465]: time="2026-04-13T23:57:27.256884431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:57:27.258389 containerd[1465]: time="2026-04-13T23:57:27.258336046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:57:27.260131 containerd[1465]: time="2026-04-13T23:57:27.259987802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:57:27.261588 containerd[1465]: time="2026-04-13T23:57:27.261519588Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:57:27.262514 containerd[1465]: time="2026-04-13T23:57:27.262452505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:57:27.263620 containerd[1465]: time="2026-04-13T23:57:27.263472204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 13 23:57:27.265429 containerd[1465]: time="2026-04-13T23:57:27.265384263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:57:27.267168 containerd[1465]: time="2026-04-13T23:57:27.267111818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.212311ms" Apr 13 23:57:27.269312 containerd[1465]: time="2026-04-13T23:57:27.268948159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.888757ms" Apr 13 23:57:27.275432 containerd[1465]: time="2026-04-13T23:57:27.275357482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.56194ms" Apr 13 23:57:27.310962 kubelet[2213]: E0413 23:57:27.310883 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446624443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446725350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446773087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446879559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446542996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446595623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446623356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.446959 containerd[1465]: time="2026-04-13T23:57:27.446775426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.450899 containerd[1465]: time="2026-04-13T23:57:27.450800448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:57:27.451261 containerd[1465]: time="2026-04-13T23:57:27.451104674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:57:27.451261 containerd[1465]: time="2026-04-13T23:57:27.451129832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.451695 containerd[1465]: time="2026-04-13T23:57:27.451457374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:57:27.504705 systemd[1]: Started cri-containerd-2402e411b7340a96352cafabefdeff2278ac86a65eb635ee9198f467ed4c6723.scope - libcontainer container 2402e411b7340a96352cafabefdeff2278ac86a65eb635ee9198f467ed4c6723. Apr 13 23:57:27.506802 systemd[1]: Started cri-containerd-5a2668c1187752126c4080c7f3a98daba12da7afda72e9039d615a7163afd2a2.scope - libcontainer container 5a2668c1187752126c4080c7f3a98daba12da7afda72e9039d615a7163afd2a2. Apr 13 23:57:27.508710 systemd[1]: Started cri-containerd-e908ec602332e8b8a7f04c1999266996701929242803477c540a46b92a582156.scope - libcontainer container e908ec602332e8b8a7f04c1999266996701929242803477c540a46b92a582156. 
Apr 13 23:57:27.587223 containerd[1465]: time="2026-04-13T23:57:27.587067950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec4e5c5105558a5e2d8b4f9772d220dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2402e411b7340a96352cafabefdeff2278ac86a65eb635ee9198f467ed4c6723\"" Apr 13 23:57:27.590835 kubelet[2213]: E0413 23:57:27.590775 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:27.601765 containerd[1465]: time="2026-04-13T23:57:27.601528759Z" level=info msg="CreateContainer within sandbox \"2402e411b7340a96352cafabefdeff2278ac86a65eb635ee9198f467ed4c6723\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 23:57:27.603013 containerd[1465]: time="2026-04-13T23:57:27.602941122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a2668c1187752126c4080c7f3a98daba12da7afda72e9039d615a7163afd2a2\"" Apr 13 23:57:27.603137 containerd[1465]: time="2026-04-13T23:57:27.602985177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"e908ec602332e8b8a7f04c1999266996701929242803477c540a46b92a582156\"" Apr 13 23:57:27.604608 kubelet[2213]: E0413 23:57:27.604421 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:27.604767 kubelet[2213]: E0413 23:57:27.604761 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:27.616352 containerd[1465]: 
time="2026-04-13T23:57:27.615473059Z" level=info msg="CreateContainer within sandbox \"5a2668c1187752126c4080c7f3a98daba12da7afda72e9039d615a7163afd2a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 23:57:27.618529 containerd[1465]: time="2026-04-13T23:57:27.618470504Z" level=info msg="CreateContainer within sandbox \"e908ec602332e8b8a7f04c1999266996701929242803477c540a46b92a582156\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 23:57:27.654979 containerd[1465]: time="2026-04-13T23:57:27.654531299Z" level=info msg="CreateContainer within sandbox \"2402e411b7340a96352cafabefdeff2278ac86a65eb635ee9198f467ed4c6723\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e6427ace6fdcdcc1ea7ed5b9d096de5a183ca06571a073d05a4fc33c5b523ed\"" Apr 13 23:57:27.660341 containerd[1465]: time="2026-04-13T23:57:27.657996779Z" level=info msg="StartContainer for \"7e6427ace6fdcdcc1ea7ed5b9d096de5a183ca06571a073d05a4fc33c5b523ed\"" Apr 13 23:57:27.678134 containerd[1465]: time="2026-04-13T23:57:27.677449318Z" level=info msg="CreateContainer within sandbox \"e908ec602332e8b8a7f04c1999266996701929242803477c540a46b92a582156\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21b6c84a716cad012156f8f744c7d3a62f47f84f259bc753fcc2dd0dff81a1e4\"" Apr 13 23:57:27.680569 containerd[1465]: time="2026-04-13T23:57:27.680523589Z" level=info msg="CreateContainer within sandbox \"5a2668c1187752126c4080c7f3a98daba12da7afda72e9039d615a7163afd2a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f02628a2eb2bdbc09b37f0036e9933db96fc525f437819a4a3a93993f5bb8ac\"" Apr 13 23:57:27.681120 containerd[1465]: time="2026-04-13T23:57:27.680881152Z" level=info msg="StartContainer for \"21b6c84a716cad012156f8f744c7d3a62f47f84f259bc753fcc2dd0dff81a1e4\"" Apr 13 23:57:27.684346 containerd[1465]: time="2026-04-13T23:57:27.684303480Z" level=info msg="StartContainer 
for \"6f02628a2eb2bdbc09b37f0036e9933db96fc525f437819a4a3a93993f5bb8ac\"" Apr 13 23:57:27.707254 systemd[1]: Started cri-containerd-7e6427ace6fdcdcc1ea7ed5b9d096de5a183ca06571a073d05a4fc33c5b523ed.scope - libcontainer container 7e6427ace6fdcdcc1ea7ed5b9d096de5a183ca06571a073d05a4fc33c5b523ed. Apr 13 23:57:27.716706 kubelet[2213]: I0413 23:57:27.716438 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:27.718255 kubelet[2213]: E0413 23:57:27.718065 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Apr 13 23:57:27.727619 systemd[1]: Started cri-containerd-21b6c84a716cad012156f8f744c7d3a62f47f84f259bc753fcc2dd0dff81a1e4.scope - libcontainer container 21b6c84a716cad012156f8f744c7d3a62f47f84f259bc753fcc2dd0dff81a1e4. Apr 13 23:57:27.760499 systemd[1]: Started cri-containerd-6f02628a2eb2bdbc09b37f0036e9933db96fc525f437819a4a3a93993f5bb8ac.scope - libcontainer container 6f02628a2eb2bdbc09b37f0036e9933db96fc525f437819a4a3a93993f5bb8ac. 
Apr 13 23:57:27.824505 kubelet[2213]: E0413 23:57:27.824306 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:57:27.829096 containerd[1465]: time="2026-04-13T23:57:27.828525411Z" level=info msg="StartContainer for \"21b6c84a716cad012156f8f744c7d3a62f47f84f259bc753fcc2dd0dff81a1e4\" returns successfully" Apr 13 23:57:27.830396 containerd[1465]: time="2026-04-13T23:57:27.830178273Z" level=info msg="StartContainer for \"7e6427ace6fdcdcc1ea7ed5b9d096de5a183ca06571a073d05a4fc33c5b523ed\" returns successfully" Apr 13 23:57:27.832580 containerd[1465]: time="2026-04-13T23:57:27.830555694Z" level=info msg="StartContainer for \"6f02628a2eb2bdbc09b37f0036e9933db96fc525f437819a4a3a93993f5bb8ac\" returns successfully" Apr 13 23:57:27.963791 kubelet[2213]: E0413 23:57:27.963757 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:27.964978 kubelet[2213]: E0413 23:57:27.964569 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:27.974037 kubelet[2213]: E0413 23:57:27.974010 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:27.975384 kubelet[2213]: E0413 23:57:27.974918 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:27.983662 kubelet[2213]: E0413 23:57:27.983068 
2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:27.984247 kubelet[2213]: E0413 23:57:27.983949 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:28.987765 kubelet[2213]: E0413 23:57:28.986841 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:28.987765 kubelet[2213]: E0413 23:57:28.986981 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:28.987765 kubelet[2213]: E0413 23:57:28.987515 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:28.987765 kubelet[2213]: E0413 23:57:28.987669 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:29.323325 kubelet[2213]: I0413 23:57:29.322996 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:29.994229 kubelet[2213]: E0413 23:57:29.994003 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:29.994684 kubelet[2213]: E0413 23:57:29.994498 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:31.000137 kubelet[2213]: E0413 23:57:30.999946 2213 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:57:31.000637 kubelet[2213]: E0413 23:57:31.000328 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:31.295786 kubelet[2213]: E0413 23:57:31.295402 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 13 23:57:31.301566 kubelet[2213]: E0413 23:57:31.300371 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60ff47ebd7e20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:57:25.817024032 +0000 UTC m=+1.686243531,LastTimestamp:2026-04-13 23:57:25.817024032 +0000 UTC m=+1.686243531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:57:31.410021 kubelet[2213]: E0413 23:57:31.409399 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60ff47f94af3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:57:25.831126846 +0000 UTC m=+1.700346344,LastTimestamp:2026-04-13 23:57:25.831126846 +0000 UTC 
m=+1.700346344,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:57:31.411659 kubelet[2213]: I0413 23:57:31.411534 2213 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:57:31.411932 kubelet[2213]: E0413 23:57:31.411891 2213 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 13 23:57:31.427903 kubelet[2213]: I0413 23:57:31.427838 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:31.533926 kubelet[2213]: E0413 23:57:31.517141 2213 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:31.533926 kubelet[2213]: I0413 23:57:31.517473 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:31.539646 kubelet[2213]: E0413 23:57:31.539559 2213 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:31.539646 kubelet[2213]: I0413 23:57:31.539622 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:31.544113 kubelet[2213]: E0413 23:57:31.544029 2213 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:31.798892 kubelet[2213]: I0413 23:57:31.798748 2213 apiserver.go:52] "Watching apiserver" Apr 13 23:57:31.834773 kubelet[2213]: I0413 23:57:31.834312 2213 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 23:57:32.605955 kubelet[2213]: I0413 23:57:32.605506 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:32.627657 kubelet[2213]: E0413 23:57:32.627567 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:33.001758 kubelet[2213]: E0413 23:57:33.001468 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:34.609335 kubelet[2213]: I0413 23:57:34.608090 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:34.643450 kubelet[2213]: E0413 23:57:34.641793 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:35.034889 kubelet[2213]: E0413 23:57:35.034390 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:57:36.035501 kubelet[2213]: I0413 23:57:36.035362 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.035340801 podStartE2EDuration="2.035340801s" podCreationTimestamp="2026-04-13 23:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:57:36.014091066 +0000 UTC m=+11.883310565" watchObservedRunningTime="2026-04-13 23:57:36.035340801 +0000 UTC m=+11.904560313" Apr 13 23:57:36.976643 systemd[1]: Reloading 
requested from client PID 2509 ('systemctl') (unit session-9.scope)... Apr 13 23:57:36.976667 systemd[1]: Reloading... Apr 13 23:57:37.187825 zram_generator::config[2544]: No configuration found. Apr 13 23:57:37.558307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:57:37.772593 systemd[1]: Reloading finished in 795 ms. Apr 13 23:57:37.921670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:37.954112 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:57:37.954982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:37.955088 systemd[1]: kubelet.service: Consumed 3.028s CPU time, 134.1M memory peak, 0B memory swap peak. Apr 13 23:57:37.974850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:57:38.331608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:57:38.340897 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:57:38.539493 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:57:38.539493 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:57:38.539493 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 23:57:38.539493 kubelet[2593]: I0413 23:57:38.538909 2593 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:57:38.564403 kubelet[2593]: I0413 23:57:38.563798 2593 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:57:38.564403 kubelet[2593]: I0413 23:57:38.563874 2593 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:57:38.565848 kubelet[2593]: I0413 23:57:38.564704 2593 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:57:38.569578 kubelet[2593]: I0413 23:57:38.568558 2593 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 23:57:38.581297 kubelet[2593]: I0413 23:57:38.577791 2593 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:57:38.606542 kubelet[2593]: E0413 23:57:38.599988 2593 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:57:38.606542 kubelet[2593]: I0413 23:57:38.600684 2593 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:57:38.612774 kubelet[2593]: I0413 23:57:38.612629 2593 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 23:57:38.613205 kubelet[2593]: I0413 23:57:38.613073 2593 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:57:38.613586 kubelet[2593]: I0413 23:57:38.613136 2593 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 23:57:38.613586 kubelet[2593]: I0413 23:57:38.613592 2593 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:57:38.613866 
kubelet[2593]: I0413 23:57:38.613609 2593 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 23:57:38.613866 kubelet[2593]: I0413 23:57:38.613697 2593 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:57:38.613921 kubelet[2593]: I0413 23:57:38.613915 2593 kubelet.go:480] "Attempting to sync node with API server" Apr 13 23:57:38.613944 kubelet[2593]: I0413 23:57:38.613925 2593 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:57:38.613965 kubelet[2593]: I0413 23:57:38.613947 2593 kubelet.go:386] "Adding apiserver pod source" Apr 13 23:57:38.613965 kubelet[2593]: I0413 23:57:38.613962 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:57:38.620970 kubelet[2593]: I0413 23:57:38.618987 2593 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:57:38.620970 kubelet[2593]: I0413 23:57:38.619898 2593 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:57:38.626736 kubelet[2593]: I0413 23:57:38.626688 2593 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 23:57:38.628115 kubelet[2593]: I0413 23:57:38.628074 2593 server.go:1289] "Started kubelet" Apr 13 23:57:38.631928 kubelet[2593]: I0413 23:57:38.631881 2593 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:57:38.642757 kubelet[2593]: I0413 23:57:38.642612 2593 server.go:317] "Adding debug handlers to kubelet server" Apr 13 23:57:38.703586 kubelet[2593]: I0413 23:57:38.697401 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:57:38.707411 sudo[2611]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 23:57:38.707917 sudo[2611]: pam_unix(sudo:session): session opened for 
user root(uid=0) by core(uid=0) Apr 13 23:57:38.708791 kubelet[2593]: I0413 23:57:38.708486 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:57:38.710341 kubelet[2593]: I0413 23:57:38.710187 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:57:38.710678 kubelet[2593]: I0413 23:57:38.710626 2593 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:57:38.720455 kubelet[2593]: I0413 23:57:38.716013 2593 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 23:57:38.720455 kubelet[2593]: I0413 23:57:38.717211 2593 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 23:57:38.720455 kubelet[2593]: I0413 23:57:38.717404 2593 reconciler.go:26] "Reconciler: start to sync state" Apr 13 23:57:38.725577 kubelet[2593]: I0413 23:57:38.725352 2593 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:57:38.725577 kubelet[2593]: I0413 23:57:38.725505 2593 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:57:38.728510 kubelet[2593]: E0413 23:57:38.727973 2593 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:57:38.732359 kubelet[2593]: I0413 23:57:38.731979 2593 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 23:57:38.734805 kubelet[2593]: I0413 23:57:38.734770 2593 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:57:38.761507 kubelet[2593]: I0413 23:57:38.756829 2593 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 13 23:57:38.761507 kubelet[2593]: I0413 23:57:38.756882 2593 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 23:57:38.761507 kubelet[2593]: I0413 23:57:38.756919 2593 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 23:57:38.761507 kubelet[2593]: I0413 23:57:38.756929 2593 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 23:57:38.761507 kubelet[2593]: E0413 23:57:38.757007 2593 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:57:38.880792 kubelet[2593]: E0413 23:57:38.880473 2593 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:57:38.906258 kubelet[2593]: I0413 23:57:38.906223 2593 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.907742 2593 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.907908 2593 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.908335 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.908349 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.908415 2593 policy_none.go:49] "None policy: Start" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.908430 2593 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 23:57:38.908712 kubelet[2593]: I0413 23:57:38.908443 2593 state_mem.go:35] "Initializing new in-memory state store" Apr 13 23:57:38.910015 kubelet[2593]: I0413 23:57:38.909940 2593 state_mem.go:75] "Updated machine memory state" Apr 13 23:57:38.927524 kubelet[2593]: E0413 23:57:38.927441 
2593 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:57:38.929368 kubelet[2593]: I0413 23:57:38.929352 2593 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:57:38.930363 kubelet[2593]: I0413 23:57:38.930004 2593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:57:38.932022 kubelet[2593]: I0413 23:57:38.931935 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:57:38.933004 kubelet[2593]: E0413 23:57:38.932922 2593 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 23:57:39.076693 kubelet[2593]: I0413 23:57:39.076629 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:57:39.082787 kubelet[2593]: I0413 23:57:39.082730 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:39.082787 kubelet[2593]: I0413 23:57:39.082761 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:57:39.083008 kubelet[2593]: I0413 23:57:39.082733 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:39.106485 kubelet[2593]: E0413 23:57:39.105979 2593 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:57:39.108829 kubelet[2593]: E0413 23:57:39.108753 2593 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 13 23:57:39.109003 kubelet[2593]: I0413 23:57:39.108981 2593 kubelet_node_status.go:124] "Node was previously registered" node="localhost" 
Apr 13 23:57:39.109402 kubelet[2593]: I0413 23:57:39.109065 2593 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 13 23:57:39.121027 kubelet[2593]: I0413 23:57:39.120954 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.121027 kubelet[2593]: I0413 23:57:39.121025 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.121659 kubelet[2593]: I0413 23:57:39.121065 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 13 23:57:39.121659 kubelet[2593]: I0413 23:57:39.121093 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:57:39.122915 kubelet[2593]: I0413 23:57:39.121112 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:57:39.123716 kubelet[2593]: I0413 23:57:39.122965 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.123716 kubelet[2593]: I0413 23:57:39.123079 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.123716 kubelet[2593]: I0413 23:57:39.123099 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.123716 kubelet[2593]: I0413 23:57:39.123117 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec4e5c5105558a5e2d8b4f9772d220dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec4e5c5105558a5e2d8b4f9772d220dc\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:57:39.406736 kubelet[2593]: E0413 23:57:39.406434 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:39.408590 kubelet[2593]: E0413 23:57:39.407477 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:39.409740 kubelet[2593]: E0413 23:57:39.409557 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:39.614891 kubelet[2593]: I0413 23:57:39.614789 2593 apiserver.go:52] "Watching apiserver"
Apr 13 23:57:39.718805 kubelet[2593]: I0413 23:57:39.717736 2593 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 23:57:39.811896 kubelet[2593]: I0413 23:57:39.811856 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.816477 kubelet[2593]: I0413 23:57:39.812897 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 13 23:57:39.816762 kubelet[2593]: I0413 23:57:39.814925 2593 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 13 23:57:39.886694 kubelet[2593]: E0413 23:57:39.886584 2593 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 13 23:57:39.887439 kubelet[2593]: E0413 23:57:39.887332 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:39.888709 kubelet[2593]: E0413 23:57:39.888091 2593 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 13 23:57:39.888709 kubelet[2593]: E0413 23:57:39.888508 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:39.892042 kubelet[2593]: E0413 23:57:39.891325 2593 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:57:39.892042 kubelet[2593]: E0413 23:57:39.891700 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:40.012128 sudo[2611]: pam_unix(sudo:session): session closed for user root
Apr 13 23:57:40.044217 kubelet[2593]: I0413 23:57:40.043223 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.0431889619999999 podStartE2EDuration="1.043188962s" podCreationTimestamp="2026-04-13 23:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:57:40.002265048 +0000 UTC m=+1.653813532" watchObservedRunningTime="2026-04-13 23:57:40.043188962 +0000 UTC m=+1.694737434"
Apr 13 23:57:40.815932 kubelet[2593]: E0413 23:57:40.815193 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:40.815932 kubelet[2593]: E0413 23:57:40.815643 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:40.815932 kubelet[2593]: E0413 23:57:40.815664 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:41.500639 kubelet[2593]: I0413 23:57:41.500384 2593 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 23:57:41.503427 containerd[1465]: time="2026-04-13T23:57:41.503389019Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 23:57:41.505300 kubelet[2593]: I0413 23:57:41.505246 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 23:57:41.840090 kubelet[2593]: E0413 23:57:41.837142 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:41.840090 kubelet[2593]: E0413 23:57:41.837886 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:42.409903 kubelet[2593]: E0413 23:57:42.409831 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:42.636390 systemd[1]: Created slice kubepods-besteffort-pod9b91cf38_39cf_43bf_985e_093cffdd5a52.slice - libcontainer container kubepods-besteffort-pod9b91cf38_39cf_43bf_985e_093cffdd5a52.slice.
Apr 13 23:57:42.674622 systemd[1]: Created slice kubepods-burstable-poda66fa1bd_1042_482b_9724_7081d0236f97.slice - libcontainer container kubepods-burstable-poda66fa1bd_1042_482b_9724_7081d0236f97.slice.
Apr 13 23:57:42.691879 systemd[1]: Created slice kubepods-besteffort-pod444eeaee_fc6e_4af9_8b0f_a55b7364c514.slice - libcontainer container kubepods-besteffort-pod444eeaee_fc6e_4af9_8b0f_a55b7364c514.slice.
Apr 13 23:57:42.698669 kubelet[2593]: I0413 23:57:42.698494 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-lib-modules\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698736 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b91cf38-39cf-43bf-985e-093cffdd5a52-kube-proxy\") pod \"kube-proxy-swkfs\" (UID: \"9b91cf38-39cf-43bf-985e-093cffdd5a52\") " pod="kube-system/kube-proxy-swkfs"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698770 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-hostproc\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698784 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-cgroup\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698796 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66fa1bd-1042-482b-9724-7081d0236f97-clustermesh-secrets\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698807 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-config-path\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698835 kubelet[2593]: I0413 23:57:42.698821 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-kernel\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698928 kubelet[2593]: I0413 23:57:42.698837 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-hubble-tls\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698928 kubelet[2593]: I0413 23:57:42.698849 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-run\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698928 kubelet[2593]: I0413 23:57:42.698858 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-bpf-maps\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.698928 kubelet[2593]: I0413 23:57:42.698893 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b91cf38-39cf-43bf-985e-093cffdd5a52-xtables-lock\") pod \"kube-proxy-swkfs\" (UID: \"9b91cf38-39cf-43bf-985e-093cffdd5a52\") " pod="kube-system/kube-proxy-swkfs"
Apr 13 23:57:42.698928 kubelet[2593]: I0413 23:57:42.698909 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b91cf38-39cf-43bf-985e-093cffdd5a52-lib-modules\") pod \"kube-proxy-swkfs\" (UID: \"9b91cf38-39cf-43bf-985e-093cffdd5a52\") " pod="kube-system/kube-proxy-swkfs"
Apr 13 23:57:42.699003 kubelet[2593]: I0413 23:57:42.698926 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjv8\" (UniqueName: \"kubernetes.io/projected/9b91cf38-39cf-43bf-985e-093cffdd5a52-kube-api-access-wxjv8\") pod \"kube-proxy-swkfs\" (UID: \"9b91cf38-39cf-43bf-985e-093cffdd5a52\") " pod="kube-system/kube-proxy-swkfs"
Apr 13 23:57:42.699003 kubelet[2593]: I0413 23:57:42.698944 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cni-path\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.699003 kubelet[2593]: I0413 23:57:42.698961 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-etc-cni-netd\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.699003 kubelet[2593]: I0413 23:57:42.698973 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-xtables-lock\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.699003 kubelet[2593]: I0413 23:57:42.698983 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-net\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.699076 kubelet[2593]: I0413 23:57:42.698997 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wp8\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-kube-api-access-55wp8\") pod \"cilium-4vjr4\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") " pod="kube-system/cilium-4vjr4"
Apr 13 23:57:42.820757 kubelet[2593]: I0413 23:57:42.820674 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq96q\" (UniqueName: \"kubernetes.io/projected/444eeaee-fc6e-4af9-8b0f-a55b7364c514-kube-api-access-cq96q\") pod \"cilium-operator-6c4d7847fc-b2g2w\" (UID: \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\") " pod="kube-system/cilium-operator-6c4d7847fc-b2g2w"
Apr 13 23:57:42.821044 kubelet[2593]: I0413 23:57:42.820887 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444eeaee-fc6e-4af9-8b0f-a55b7364c514-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-b2g2w\" (UID: \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\") " pod="kube-system/cilium-operator-6c4d7847fc-b2g2w"
Apr 13 23:57:42.860518 kubelet[2593]: E0413 23:57:42.860476 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:42.904537 kubelet[2593]: E0413 23:57:42.904451 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.007897 kubelet[2593]: E0413 23:57:43.007583 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.009633 containerd[1465]: time="2026-04-13T23:57:43.009549879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swkfs,Uid:9b91cf38-39cf-43bf-985e-093cffdd5a52,Namespace:kube-system,Attempt:0,}"
Apr 13 23:57:43.011000 kubelet[2593]: E0413 23:57:43.010928 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.012730 containerd[1465]: time="2026-04-13T23:57:43.012309720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vjr4,Uid:a66fa1bd-1042-482b-9724-7081d0236f97,Namespace:kube-system,Attempt:0,}"
Apr 13 23:57:43.088936 sudo[1654]: pam_unix(sudo:session): session closed for user root
Apr 13 23:57:43.090845 containerd[1465]: time="2026-04-13T23:57:43.090682839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:57:43.090845 containerd[1465]: time="2026-04-13T23:57:43.090770611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:57:43.091032 containerd[1465]: time="2026-04-13T23:57:43.090784102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.091032 containerd[1465]: time="2026-04-13T23:57:43.090941837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.094965 sshd[1649]: pam_unix(sshd:session): session closed for user core
Apr 13 23:57:43.101509 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:51996.service: Deactivated successfully.
Apr 13 23:57:43.104970 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 23:57:43.105524 systemd[1]: session-9.scope: Consumed 8.475s CPU time, 161.7M memory peak, 0B memory swap peak.
Apr 13 23:57:43.106817 containerd[1465]: time="2026-04-13T23:57:43.105506697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:57:43.106817 containerd[1465]: time="2026-04-13T23:57:43.105579394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:57:43.106817 containerd[1465]: time="2026-04-13T23:57:43.105597230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.106817 containerd[1465]: time="2026-04-13T23:57:43.105686918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.109565 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
Apr 13 23:57:43.114608 systemd-logind[1450]: Removed session 9.
Apr 13 23:57:43.133755 systemd[1]: Started cri-containerd-a631d00791baf6bf509f0092b3d98fda56d092dd022b473bb357c9f153888bed.scope - libcontainer container a631d00791baf6bf509f0092b3d98fda56d092dd022b473bb357c9f153888bed.
Apr 13 23:57:43.213684 systemd[1]: Started cri-containerd-ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa.scope - libcontainer container ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa.
Apr 13 23:57:43.282950 containerd[1465]: time="2026-04-13T23:57:43.281967806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vjr4,Uid:a66fa1bd-1042-482b-9724-7081d0236f97,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\""
Apr 13 23:57:43.285268 kubelet[2593]: E0413 23:57:43.284984 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.291824 containerd[1465]: time="2026-04-13T23:57:43.291775903Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 13 23:57:43.301409 kubelet[2593]: E0413 23:57:43.301124 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.302628 containerd[1465]: time="2026-04-13T23:57:43.302558406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b2g2w,Uid:444eeaee-fc6e-4af9-8b0f-a55b7364c514,Namespace:kube-system,Attempt:0,}"
Apr 13 23:57:43.305843 containerd[1465]: time="2026-04-13T23:57:43.304817001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swkfs,Uid:9b91cf38-39cf-43bf-985e-093cffdd5a52,Namespace:kube-system,Attempt:0,} returns sandbox id \"a631d00791baf6bf509f0092b3d98fda56d092dd022b473bb357c9f153888bed\""
Apr 13 23:57:43.307955 kubelet[2593]: E0413 23:57:43.307926 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.354686 containerd[1465]: time="2026-04-13T23:57:43.344096770Z" level=info msg="CreateContainer within sandbox \"a631d00791baf6bf509f0092b3d98fda56d092dd022b473bb357c9f153888bed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 23:57:43.450702 containerd[1465]: time="2026-04-13T23:57:43.449874075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:57:43.450702 containerd[1465]: time="2026-04-13T23:57:43.450113152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:57:43.450702 containerd[1465]: time="2026-04-13T23:57:43.450131377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.450702 containerd[1465]: time="2026-04-13T23:57:43.450321509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:57:43.474241 containerd[1465]: time="2026-04-13T23:57:43.473844376Z" level=info msg="CreateContainer within sandbox \"a631d00791baf6bf509f0092b3d98fda56d092dd022b473bb357c9f153888bed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4810959743f7758e6538b69af37ff2a0d0e47158ebb18dbec2bebdc1465c8373\""
Apr 13 23:57:43.475743 containerd[1465]: time="2026-04-13T23:57:43.475045520Z" level=info msg="StartContainer for \"4810959743f7758e6538b69af37ff2a0d0e47158ebb18dbec2bebdc1465c8373\""
Apr 13 23:57:43.497427 systemd[1]: Started cri-containerd-6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e.scope - libcontainer container 6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e.
Apr 13 23:57:43.648584 systemd[1]: Started cri-containerd-4810959743f7758e6538b69af37ff2a0d0e47158ebb18dbec2bebdc1465c8373.scope - libcontainer container 4810959743f7758e6538b69af37ff2a0d0e47158ebb18dbec2bebdc1465c8373.
Apr 13 23:57:43.678305 containerd[1465]: time="2026-04-13T23:57:43.678224589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b2g2w,Uid:444eeaee-fc6e-4af9-8b0f-a55b7364c514,Namespace:kube-system,Attempt:0,} returns sandbox id \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\""
Apr 13 23:57:43.680008 kubelet[2593]: E0413 23:57:43.679954 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.700993 containerd[1465]: time="2026-04-13T23:57:43.700924339Z" level=info msg="StartContainer for \"4810959743f7758e6538b69af37ff2a0d0e47158ebb18dbec2bebdc1465c8373\" returns successfully"
Apr 13 23:57:43.877566 kubelet[2593]: E0413 23:57:43.877344 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.878790 kubelet[2593]: E0413 23:57:43.878643 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:43.904942 kubelet[2593]: I0413 23:57:43.904096 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-swkfs" podStartSLOduration=1.904050477 podStartE2EDuration="1.904050477s" podCreationTimestamp="2026-04-13 23:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:57:43.903721259 +0000 UTC m=+5.555269743" watchObservedRunningTime="2026-04-13 23:57:43.904050477 +0000 UTC m=+5.555598957"
Apr 13 23:57:57.877385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500487668.mount: Deactivated successfully.
Apr 13 23:58:05.531055 containerd[1465]: time="2026-04-13T23:58:05.530855435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:05.533517 containerd[1465]: time="2026-04-13T23:58:05.533439770Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 13 23:58:05.566094 containerd[1465]: time="2026-04-13T23:58:05.566012561Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:05.572810 containerd[1465]: time="2026-04-13T23:58:05.572679574Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 22.280635296s"
Apr 13 23:58:05.573039 containerd[1465]: time="2026-04-13T23:58:05.572827443Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 13 23:58:05.583490 containerd[1465]: time="2026-04-13T23:58:05.583333311Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 13 23:58:05.603417 containerd[1465]: time="2026-04-13T23:58:05.603270121Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 23:58:05.693341 containerd[1465]: time="2026-04-13T23:58:05.692576908Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\""
Apr 13 23:58:05.697403 containerd[1465]: time="2026-04-13T23:58:05.695342029Z" level=info msg="StartContainer for \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\""
Apr 13 23:58:05.746731 systemd[1]: run-containerd-runc-k8s.io-363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c-runc.xfdDp5.mount: Deactivated successfully.
Apr 13 23:58:05.766936 systemd[1]: Started cri-containerd-363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c.scope - libcontainer container 363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c.
Apr 13 23:58:05.906783 containerd[1465]: time="2026-04-13T23:58:05.906716811Z" level=info msg="StartContainer for \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\" returns successfully"
Apr 13 23:58:05.991249 systemd[1]: cri-containerd-363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c.scope: Deactivated successfully.
Apr 13 23:58:06.180391 kubelet[2593]: E0413 23:58:06.177836 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:06.267542 containerd[1465]: time="2026-04-13T23:58:06.266945240Z" level=info msg="shim disconnected" id=363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c namespace=k8s.io
Apr 13 23:58:06.267542 containerd[1465]: time="2026-04-13T23:58:06.267027163Z" level=warning msg="cleaning up after shim disconnected" id=363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c namespace=k8s.io
Apr 13 23:58:06.267542 containerd[1465]: time="2026-04-13T23:58:06.267036244Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:58:06.690946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c-rootfs.mount: Deactivated successfully.
Apr 13 23:58:07.185117 kubelet[2593]: E0413 23:58:07.184890 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:07.218023 containerd[1465]: time="2026-04-13T23:58:07.217678781Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 23:58:07.380973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385076391.mount: Deactivated successfully.
Apr 13 23:58:07.437695 containerd[1465]: time="2026-04-13T23:58:07.436672736Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\""
Apr 13 23:58:07.440029 containerd[1465]: time="2026-04-13T23:58:07.439973434Z" level=info msg="StartContainer for \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\""
Apr 13 23:58:07.508551 systemd[1]: Started cri-containerd-896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075.scope - libcontainer container 896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075.
Apr 13 23:58:07.575470 containerd[1465]: time="2026-04-13T23:58:07.574928915Z" level=info msg="StartContainer for \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\" returns successfully"
Apr 13 23:58:07.620897 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 23:58:07.621432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:58:07.621514 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:58:07.639889 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:58:07.640127 systemd[1]: cri-containerd-896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075.scope: Deactivated successfully.
Apr 13 23:58:07.710471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075-rootfs.mount: Deactivated successfully.
Apr 13 23:58:07.714630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:58:07.738692 containerd[1465]: time="2026-04-13T23:58:07.738567353Z" level=info msg="shim disconnected" id=896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075 namespace=k8s.io
Apr 13 23:58:07.738692 containerd[1465]: time="2026-04-13T23:58:07.738660579Z" level=warning msg="cleaning up after shim disconnected" id=896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075 namespace=k8s.io
Apr 13 23:58:07.738692 containerd[1465]: time="2026-04-13T23:58:07.738673824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:58:08.195964 kubelet[2593]: E0413 23:58:08.195883 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:08.239809 containerd[1465]: time="2026-04-13T23:58:08.238642720Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 23:58:08.352371 containerd[1465]: time="2026-04-13T23:58:08.352032468Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\""
Apr 13 23:58:08.358804 containerd[1465]: time="2026-04-13T23:58:08.358652388Z" level=info msg="StartContainer for \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\""
Apr 13 23:58:08.441913 systemd[1]: Started cri-containerd-9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1.scope - libcontainer container 9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1.
Apr 13 23:58:08.541610 systemd[1]: cri-containerd-9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1.scope: Deactivated successfully.
Apr 13 23:58:08.581749 containerd[1465]: time="2026-04-13T23:58:08.581558973Z" level=info msg="StartContainer for \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\" returns successfully"
Apr 13 23:58:08.714080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1-rootfs.mount: Deactivated successfully.
Apr 13 23:58:08.722890 containerd[1465]: time="2026-04-13T23:58:08.722817852Z" level=info msg="shim disconnected" id=9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1 namespace=k8s.io
Apr 13 23:58:08.722890 containerd[1465]: time="2026-04-13T23:58:08.722885036Z" level=warning msg="cleaning up after shim disconnected" id=9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1 namespace=k8s.io
Apr 13 23:58:08.722890 containerd[1465]: time="2026-04-13T23:58:08.722896428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:58:09.108201 containerd[1465]: time="2026-04-13T23:58:09.106898984Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:09.109387 containerd[1465]: time="2026-04-13T23:58:09.109322625Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 13 23:58:09.113109 containerd[1465]: time="2026-04-13T23:58:09.112795559Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:09.115759 containerd[1465]: time="2026-04-13T23:58:09.115694086Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.532147587s"
Apr 13 23:58:09.116007 containerd[1465]: time="2026-04-13T23:58:09.115770373Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 13 23:58:09.137906 containerd[1465]: time="2026-04-13T23:58:09.136641677Z" level=info msg="CreateContainer within sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 13 23:58:09.205839 containerd[1465]: time="2026-04-13T23:58:09.205782049Z" level=info msg="CreateContainer within sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\""
Apr 13 23:58:09.207091 containerd[1465]: time="2026-04-13T23:58:09.206873779Z" level=info msg="StartContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\""
Apr 13 23:58:09.216485 kubelet[2593]: E0413 23:58:09.213906 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:09.284764 containerd[1465]: time="2026-04-13T23:58:09.284675778Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 23:58:09.344776 containerd[1465]: time="2026-04-13T23:58:09.344711979Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\""
Apr 13 23:58:09.348373 containerd[1465]: time="2026-04-13T23:58:09.346269545Z" level=info msg="StartContainer for \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\""
Apr 13 23:58:09.349382 systemd[1]: Started cri-containerd-1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2.scope - libcontainer container 1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2.
Apr 13 23:58:09.420493 systemd[1]: Started cri-containerd-ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c.scope - libcontainer container ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c.
Apr 13 23:58:09.458909 containerd[1465]: time="2026-04-13T23:58:09.458808043Z" level=info msg="StartContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" returns successfully"
Apr 13 23:58:09.517890 systemd[1]: cri-containerd-ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c.scope: Deactivated successfully.
Apr 13 23:58:09.521761 containerd[1465]: time="2026-04-13T23:58:09.521533995Z" level=info msg="StartContainer for \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\" returns successfully"
Apr 13 23:58:09.688650 containerd[1465]: time="2026-04-13T23:58:09.688090222Z" level=info msg="shim disconnected" id=ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c namespace=k8s.io
Apr 13 23:58:09.688650 containerd[1465]: time="2026-04-13T23:58:09.688308401Z" level=warning msg="cleaning up after shim disconnected" id=ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c namespace=k8s.io
Apr 13 23:58:09.688650 containerd[1465]: time="2026-04-13T23:58:09.688329265Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:58:10.272901 kubelet[2593]: E0413 23:58:10.272825 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:10.289952 kubelet[2593]: E0413 23:58:10.289859 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:10.300983 containerd[1465]: time="2026-04-13T23:58:10.300887813Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 23:58:10.412801 containerd[1465]: time="2026-04-13T23:58:10.412493479Z" level=info msg="CreateContainer within sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\""
Apr 13 23:58:10.423575 containerd[1465]: time="2026-04-13T23:58:10.423396239Z" level=info msg="StartContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\""
Apr 13 23:58:10.580712 systemd[1]: Started cri-containerd-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352.scope - libcontainer container 684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352.
Apr 13 23:58:10.687590 kubelet[2593]: I0413 23:58:10.687445 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-b2g2w" podStartSLOduration=3.250559516 podStartE2EDuration="28.687381088s" podCreationTimestamp="2026-04-13 23:57:42 +0000 UTC" firstStartedPulling="2026-04-13 23:57:43.684020511 +0000 UTC m=+5.335569001" lastFinishedPulling="2026-04-13 23:58:09.120842097 +0000 UTC m=+30.772390573" observedRunningTime="2026-04-13 23:58:10.422965184 +0000 UTC m=+32.074513662" watchObservedRunningTime="2026-04-13 23:58:10.687381088 +0000 UTC m=+32.338929573"
Apr 13 23:58:10.706917 systemd[1]: run-containerd-runc-k8s.io-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352-runc.jN42uh.mount: Deactivated successfully.
Apr 13 23:58:10.772050 containerd[1465]: time="2026-04-13T23:58:10.771953993Z" level=info msg="StartContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" returns successfully"
Apr 13 23:58:11.036669 systemd[1]: run-containerd-runc-k8s.io-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352-runc.rju1M2.mount: Deactivated successfully.
Apr 13 23:58:11.307674 kubelet[2593]: E0413 23:58:11.305668 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:11.309022 kubelet[2593]: E0413 23:58:11.308982 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:11.374857 kubelet[2593]: I0413 23:58:11.374752 2593 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 13 23:58:11.519920 kubelet[2593]: I0413 23:58:11.517134 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4vjr4" podStartSLOduration=7.220997236 podStartE2EDuration="29.517104845s" podCreationTimestamp="2026-04-13 23:57:42 +0000 UTC" firstStartedPulling="2026-04-13 23:57:43.286743462 +0000 UTC m=+4.938291940" lastFinishedPulling="2026-04-13 23:58:05.582851068 +0000 UTC m=+27.234399549" observedRunningTime="2026-04-13 23:58:11.414838287 +0000 UTC m=+33.066386764" watchObservedRunningTime="2026-04-13 23:58:11.517104845 +0000 UTC m=+33.168653337"
Apr 13 23:58:11.606506 systemd[1]: Created slice kubepods-burstable-pod45bb15a9_86dc_4ff1_8bb6_52b73e813f0a.slice - libcontainer container kubepods-burstable-pod45bb15a9_86dc_4ff1_8bb6_52b73e813f0a.slice.
Apr 13 23:58:11.617691 systemd[1]: Created slice kubepods-burstable-podb404693a_e2da_476d_8ed8_c983adc75311.slice - libcontainer container kubepods-burstable-podb404693a_e2da_476d_8ed8_c983adc75311.slice.
Apr 13 23:58:11.713000 kubelet[2593]: I0413 23:58:11.712898 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ljj\" (UniqueName: \"kubernetes.io/projected/45bb15a9-86dc-4ff1-8bb6-52b73e813f0a-kube-api-access-67ljj\") pod \"coredns-674b8bbfcf-mjcvf\" (UID: \"45bb15a9-86dc-4ff1-8bb6-52b73e813f0a\") " pod="kube-system/coredns-674b8bbfcf-mjcvf"
Apr 13 23:58:11.713000 kubelet[2593]: I0413 23:58:11.712994 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqqzb\" (UniqueName: \"kubernetes.io/projected/b404693a-e2da-476d-8ed8-c983adc75311-kube-api-access-xqqzb\") pod \"coredns-674b8bbfcf-mzp22\" (UID: \"b404693a-e2da-476d-8ed8-c983adc75311\") " pod="kube-system/coredns-674b8bbfcf-mzp22"
Apr 13 23:58:11.713307 kubelet[2593]: I0413 23:58:11.713028 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b404693a-e2da-476d-8ed8-c983adc75311-config-volume\") pod \"coredns-674b8bbfcf-mzp22\" (UID: \"b404693a-e2da-476d-8ed8-c983adc75311\") " pod="kube-system/coredns-674b8bbfcf-mzp22"
Apr 13 23:58:11.713307 kubelet[2593]: I0413 23:58:11.713061 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45bb15a9-86dc-4ff1-8bb6-52b73e813f0a-config-volume\") pod \"coredns-674b8bbfcf-mjcvf\" (UID: \"45bb15a9-86dc-4ff1-8bb6-52b73e813f0a\") " pod="kube-system/coredns-674b8bbfcf-mjcvf"
Apr 13 23:58:11.921237 kubelet[2593]: E0413 23:58:11.917908 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:11.922906 kubelet[2593]: E0413 23:58:11.922458 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:11.931453 containerd[1465]: time="2026-04-13T23:58:11.930968519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzp22,Uid:b404693a-e2da-476d-8ed8-c983adc75311,Namespace:kube-system,Attempt:0,}"
Apr 13 23:58:11.931453 containerd[1465]: time="2026-04-13T23:58:11.931078814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjcvf,Uid:45bb15a9-86dc-4ff1-8bb6-52b73e813f0a,Namespace:kube-system,Attempt:0,}"
Apr 13 23:58:12.313459 kubelet[2593]: E0413 23:58:12.312065 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:13.319211 kubelet[2593]: E0413 23:58:13.318344 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:14.301648 systemd-networkd[1390]: cilium_host: Link UP
Apr 13 23:58:14.302114 systemd-networkd[1390]: cilium_net: Link UP
Apr 13 23:58:14.302118 systemd-networkd[1390]: cilium_net: Gained carrier
Apr 13 23:58:14.304060 systemd-networkd[1390]: cilium_host: Gained carrier
Apr 13 23:58:14.693586 systemd-networkd[1390]: cilium_host: Gained IPv6LL
Apr 13 23:58:14.813115 systemd-networkd[1390]: cilium_vxlan: Link UP
Apr 13 23:58:14.813124 systemd-networkd[1390]: cilium_vxlan: Gained carrier
Apr 13 23:58:14.875457 systemd-networkd[1390]: cilium_net: Gained IPv6LL
Apr 13 23:58:16.006009 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL
Apr 13 23:58:16.103228 kernel: NET: Registered PF_ALG protocol family
Apr 13 23:58:19.026253 kubelet[2593]: E0413 23:58:19.025214 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:19.025617 systemd-networkd[1390]: lxc_health: Link UP
Apr 13 23:58:19.033015 systemd-networkd[1390]: lxc_health: Gained carrier
Apr 13 23:58:19.402940 kubelet[2593]: E0413 23:58:19.402374 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:19.768864 systemd-networkd[1390]: lxc06c1115dbf94: Link UP
Apr 13 23:58:19.770607 kernel: eth0: renamed from tmpe15fc
Apr 13 23:58:19.789652 systemd-networkd[1390]: lxcdab6fd9bc604: Link UP
Apr 13 23:58:19.789901 systemd-networkd[1390]: lxc06c1115dbf94: Gained carrier
Apr 13 23:58:19.793793 kernel: eth0: renamed from tmp0d5ca
Apr 13 23:58:19.799564 systemd-networkd[1390]: lxcdab6fd9bc604: Gained carrier
Apr 13 23:58:20.158582 systemd-networkd[1390]: lxc_health: Gained IPv6LL
Apr 13 23:58:20.408489 kubelet[2593]: E0413 23:58:20.408278 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:21.116602 systemd-networkd[1390]: lxcdab6fd9bc604: Gained IPv6LL
Apr 13 23:58:21.259923 systemd-networkd[1390]: lxc06c1115dbf94: Gained IPv6LL
Apr 13 23:58:33.184795 containerd[1465]: time="2026-04-13T23:58:33.184040042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:58:33.184795 containerd[1465]: time="2026-04-13T23:58:33.184114137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:58:33.184795 containerd[1465]: time="2026-04-13T23:58:33.184136203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:58:33.184795 containerd[1465]: time="2026-04-13T23:58:33.184586464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:58:33.200444 containerd[1465]: time="2026-04-13T23:58:33.200319569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:58:33.200588 containerd[1465]: time="2026-04-13T23:58:33.200456623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:58:33.200588 containerd[1465]: time="2026-04-13T23:58:33.200474869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:58:33.200658 containerd[1465]: time="2026-04-13T23:58:33.200594079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:58:33.215708 systemd[1]: run-containerd-runc-k8s.io-0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820-runc.P7BBhg.mount: Deactivated successfully.
Apr 13 23:58:33.223512 systemd[1]: Started cri-containerd-0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820.scope - libcontainer container 0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820.
Apr 13 23:58:33.244471 systemd[1]: Started cri-containerd-e15fcb75cc264fe5b82ec711af08928f4aecb40cef89fc1d0d7ed82304b91787.scope - libcontainer container e15fcb75cc264fe5b82ec711af08928f4aecb40cef89fc1d0d7ed82304b91787.
Apr 13 23:58:33.263704 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 13 23:58:33.282882 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 13 23:58:33.337603 containerd[1465]: time="2026-04-13T23:58:33.337059016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzp22,Uid:b404693a-e2da-476d-8ed8-c983adc75311,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820\""
Apr 13 23:58:33.341198 kubelet[2593]: E0413 23:58:33.340762 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:33.380763 containerd[1465]: time="2026-04-13T23:58:33.380668852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjcvf,Uid:45bb15a9-86dc-4ff1-8bb6-52b73e813f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e15fcb75cc264fe5b82ec711af08928f4aecb40cef89fc1d0d7ed82304b91787\""
Apr 13 23:58:33.382616 kubelet[2593]: E0413 23:58:33.382573 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:33.385310 containerd[1465]: time="2026-04-13T23:58:33.385255691Z" level=info msg="CreateContainer within sandbox \"0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 23:58:33.391265 containerd[1465]: time="2026-04-13T23:58:33.391020177Z" level=info msg="CreateContainer within sandbox \"e15fcb75cc264fe5b82ec711af08928f4aecb40cef89fc1d0d7ed82304b91787\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 23:58:33.511518 containerd[1465]: time="2026-04-13T23:58:33.510461853Z" level=info msg="CreateContainer within sandbox \"0d5cab472a3559171698749e5f2533bfac39bafa4f7aa9f58666c06d6aaff820\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e635be511e52f7329281b68cf2b66588af4f64137dc545b1185923a0ae094aff\""
Apr 13 23:58:33.512588 containerd[1465]: time="2026-04-13T23:58:33.512478433Z" level=info msg="StartContainer for \"e635be511e52f7329281b68cf2b66588af4f64137dc545b1185923a0ae094aff\""
Apr 13 23:58:33.513228 containerd[1465]: time="2026-04-13T23:58:33.513065238Z" level=info msg="CreateContainer within sandbox \"e15fcb75cc264fe5b82ec711af08928f4aecb40cef89fc1d0d7ed82304b91787\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da931e4e6b7395e7542abcc33695e1e249e6663376f5e353ddc180a65571b0c2\""
Apr 13 23:58:33.516205 containerd[1465]: time="2026-04-13T23:58:33.515130675Z" level=info msg="StartContainer for \"da931e4e6b7395e7542abcc33695e1e249e6663376f5e353ddc180a65571b0c2\""
Apr 13 23:58:33.571802 systemd[1]: Started cri-containerd-da931e4e6b7395e7542abcc33695e1e249e6663376f5e353ddc180a65571b0c2.scope - libcontainer container da931e4e6b7395e7542abcc33695e1e249e6663376f5e353ddc180a65571b0c2.
Apr 13 23:58:33.573544 systemd[1]: Started cri-containerd-e635be511e52f7329281b68cf2b66588af4f64137dc545b1185923a0ae094aff.scope - libcontainer container e635be511e52f7329281b68cf2b66588af4f64137dc545b1185923a0ae094aff.
Apr 13 23:58:33.617467 containerd[1465]: time="2026-04-13T23:58:33.617373717Z" level=info msg="StartContainer for \"e635be511e52f7329281b68cf2b66588af4f64137dc545b1185923a0ae094aff\" returns successfully"
Apr 13 23:58:33.707471 containerd[1465]: time="2026-04-13T23:58:33.644533939Z" level=info msg="StartContainer for \"da931e4e6b7395e7542abcc33695e1e249e6663376f5e353ddc180a65571b0c2\" returns successfully"
Apr 13 23:58:34.571182 kubelet[2593]: E0413 23:58:34.570987 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:34.577467 kubelet[2593]: E0413 23:58:34.577444 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:34.595867 kubelet[2593]: I0413 23:58:34.595752 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mjcvf" podStartSLOduration=52.595709595 podStartE2EDuration="52.595709595s" podCreationTimestamp="2026-04-13 23:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:58:34.592782486 +0000 UTC m=+56.244330960" watchObservedRunningTime="2026-04-13 23:58:34.595709595 +0000 UTC m=+56.247258069"
Apr 13 23:58:35.585376 kubelet[2593]: E0413 23:58:35.583671 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:35.585376 kubelet[2593]: E0413 23:58:35.584809 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:36.589177 kubelet[2593]: E0413 23:58:36.589118 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:36.589982 kubelet[2593]: E0413 23:58:36.589799 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:43.760313 kubelet[2593]: E0413 23:58:43.759882 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:45.760479 kubelet[2593]: E0413 23:58:45.759380 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:49.760206 kubelet[2593]: E0413 23:58:49.758764 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:50.768369 kubelet[2593]: E0413 23:58:50.767649 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:24.773806 kubelet[2593]: E0413 23:59:24.773657 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:38.760633 kubelet[2593]: E0413 23:59:38.759268 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:39.778034 kubelet[2593]: E0413 23:59:39.777962 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:53.760235 kubelet[2593]: E0413 23:59:53.760066 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:56.769892 kubelet[2593]: E0413 23:59:56.769818 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:59.760956 kubelet[2593]: E0413 23:59:59.760215 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:08.767070 kubelet[2593]: E0414 00:00:08.766982 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:09.774220 kubelet[2593]: E0414 00:00:09.774097 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:18.903120 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Apr 14 00:00:18.967345 systemd[1]: logrotate.service: Deactivated successfully.
Apr 14 00:00:32.980925 update_engine[1455]: I20260414 00:00:32.975211 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 14 00:00:32.980925 update_engine[1455]: I20260414 00:00:32.975289 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 14 00:00:32.980925 update_engine[1455]: I20260414 00:00:32.976708 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 14 00:00:32.980925 update_engine[1455]: I20260414 00:00:32.980938 1455 omaha_request_params.cc:62] Current group set to lts
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.983682 1455 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.983732 1455 update_attempter.cc:643] Scheduling an action processor start.
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.983755 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.983880 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.983991 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.984000 1455 omaha_request_action.cc:272] Request:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]:
Apr 14 00:00:32.984443 update_engine[1455]: I20260414 00:00:32.984007 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:00:32.985374 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 14 00:00:32.993897 update_engine[1455]: I20260414 00:00:32.993789 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:00:32.995408 update_engine[1455]: I20260414 00:00:32.994925 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:00:33.023207 update_engine[1455]: E20260414 00:00:33.022927 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:00:33.023207 update_engine[1455]: I20260414 00:00:33.023088 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 14 00:00:39.779455 kubelet[2593]: E0414 00:00:39.772075 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:42.924474 update_engine[1455]: I20260414 00:00:42.911882 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:00:42.925521 update_engine[1455]: I20260414 00:00:42.925469 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:00:42.925971 update_engine[1455]: I20260414 00:00:42.925947 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:00:42.940714 update_engine[1455]: E20260414 00:00:42.940644 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:00:42.941021 update_engine[1455]: I20260414 00:00:42.941000 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 14 00:00:52.894939 update_engine[1455]: I20260414 00:00:52.894277 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:00:52.894939 update_engine[1455]: I20260414 00:00:52.894657 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:00:52.894939 update_engine[1455]: I20260414 00:00:52.894870 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:00:52.912267 update_engine[1455]: E20260414 00:00:52.912106 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:00:52.912267 update_engine[1455]: I20260414 00:00:52.912247 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 14 00:00:53.765977 kubelet[2593]: E0414 00:00:53.762998 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:02.898053 update_engine[1455]: I20260414 00:01:02.896926 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:01:02.899731 update_engine[1455]: I20260414 00:01:02.898131 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:01:02.899731 update_engine[1455]: I20260414 00:01:02.898404 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:01:02.914197 update_engine[1455]: E20260414 00:01:02.910028 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.910125 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.910135 1455 omaha_request_action.cc:617] Omaha request response:
Apr 14 00:01:02.914197 update_engine[1455]: E20260414 00:01:02.910725 1455 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911267 1455 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911278 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911283 1455 update_attempter.cc:306] Processing Done.
Apr 14 00:01:02.914197 update_engine[1455]: E20260414 00:01:02.911307 1455 update_attempter.cc:619] Update failed.
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911314 1455 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911322 1455 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911336 1455 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911552 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911716 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 00:01:02.914197 update_engine[1455]: I20260414 00:01:02.911724 1455 omaha_request_action.cc:272] Request:
Apr 14 00:01:02.914197 update_engine[1455]:
Apr 14 00:01:02.914197 update_engine[1455]:
Apr 14 00:01:02.914197 update_engine[1455]:
Apr 14 00:01:02.917440 update_engine[1455]:
Apr 14 00:01:02.917440 update_engine[1455]:
Apr 14 00:01:02.917440 update_engine[1455]:
Apr 14 00:01:02.917440 update_engine[1455]: I20260414 00:01:02.911731 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:01:02.917440 update_engine[1455]: I20260414 00:01:02.913691 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:01:02.917825 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 14 00:01:02.919568 update_engine[1455]: I20260414 00:01:02.918573 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:01:02.932999 update_engine[1455]: E20260414 00:01:02.932435 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932731 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932767 1455 omaha_request_action.cc:617] Omaha request response:
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932778 1455 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932787 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932793 1455 update_attempter.cc:306] Processing Done.
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932803 1455 update_attempter.cc:310] Error event sent.
Apr 14 00:01:02.932999 update_engine[1455]: I20260414 00:01:02.932817 1455 update_check_scheduler.cc:74] Next update check in 48m22s
Apr 14 00:01:02.934101 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 14 00:01:04.761049 kubelet[2593]: E0414 00:01:04.759401 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:08.773857 kubelet[2593]: E0414 00:01:08.773772 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:14.770030 kubelet[2593]: E0414 00:01:14.765588 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:16.785713 kubelet[2593]: E0414 00:01:16.785643 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:19.774550 kubelet[2593]: E0414 00:01:19.774505 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:36.766828 kubelet[2593]: E0414 00:01:36.766361 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:56.773533 kubelet[2593]: E0414 00:01:56.773482 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:59.777242 kubelet[2593]: E0414 00:01:59.777004 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:13.807398 kubelet[2593]: E0414 00:02:13.806979 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:16.792736 kubelet[2593]: E0414 00:02:16.792444 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:21.905305 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:35168.service - OpenSSH per-connection server daemon (10.0.0.1:35168).
Apr 14 00:02:22.083865 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 35168 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:22.095373 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:22.140950 systemd-logind[1450]: New session 10 of user core.
Apr 14 00:02:22.217065 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 00:02:23.210265 sshd[4026]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:23.230021 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:35168.service: Deactivated successfully.
Apr 14 00:02:23.234072 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 00:02:23.238796 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Apr 14 00:02:23.242123 systemd-logind[1450]: Removed session 10.
Apr 14 00:02:28.244922 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:37372.service - OpenSSH per-connection server daemon (10.0.0.1:37372).
Apr 14 00:02:28.475925 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 37372 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:28.479628 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:28.507306 systemd-logind[1450]: New session 11 of user core.
Apr 14 00:02:28.562512 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 00:02:29.163211 sshd[4045]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:29.169117 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:37372.service: Deactivated successfully.
Apr 14 00:02:29.182919 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 00:02:29.199705 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Apr 14 00:02:29.209304 systemd-logind[1450]: Removed session 11.
Apr 14 00:02:29.802917 kubelet[2593]: E0414 00:02:29.802711 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:30.807373 kubelet[2593]: E0414 00:02:30.805467 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:34.290351 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378).
Apr 14 00:02:34.361667 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:34.371120 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:34.391559 systemd-logind[1450]: New session 12 of user core.
Apr 14 00:02:34.421355 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 00:02:34.788105 kubelet[2593]: E0414 00:02:34.788015 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:34.839971 sshd[4061]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:34.850831 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:37378.service: Deactivated successfully.
Apr 14 00:02:34.871730 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 00:02:34.897970 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
Apr 14 00:02:34.906743 systemd-logind[1450]: Removed session 12.
Apr 14 00:02:39.969555 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:54148.service - OpenSSH per-connection server daemon (10.0.0.1:54148).
Apr 14 00:02:40.028638 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 54148 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:40.031966 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:40.116759 systemd-logind[1450]: New session 13 of user core.
Apr 14 00:02:40.132509 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 00:02:40.692261 sshd[4079]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:40.700823 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:54148.service: Deactivated successfully.
Apr 14 00:02:40.717785 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 00:02:40.720031 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Apr 14 00:02:40.721814 systemd-logind[1450]: Removed session 13.
Apr 14 00:02:45.743025 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:38076.service - OpenSSH per-connection server daemon (10.0.0.1:38076).
Apr 14 00:02:45.849623 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 38076 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:45.862283 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:45.916631 systemd-logind[1450]: New session 14 of user core.
Apr 14 00:02:45.977053 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 00:02:46.435564 sshd[4097]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:46.494413 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:38076.service: Deactivated successfully.
Apr 14 00:02:46.497352 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 00:02:46.499651 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Apr 14 00:02:46.501374 systemd-logind[1450]: Removed session 14.
Apr 14 00:02:47.789358 kubelet[2593]: E0414 00:02:47.788906 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:51.520960 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:38088.service - OpenSSH per-connection server daemon (10.0.0.1:38088).
Apr 14 00:02:51.625644 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 38088 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:51.627077 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:51.700826 systemd-logind[1450]: New session 15 of user core.
Apr 14 00:02:51.710185 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 00:02:52.171523 sshd[4114]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:52.196066 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:38088.service: Deactivated successfully.
Apr 14 00:02:52.214181 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 00:02:52.232934 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Apr 14 00:02:52.242531 systemd-logind[1450]: Removed session 15.
Apr 14 00:02:57.277769 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018).
Apr 14 00:02:57.369956 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:02:57.372604 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:02:57.386519 systemd-logind[1450]: New session 16 of user core.
Apr 14 00:02:57.407584 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 00:02:57.920781 sshd[4129]: pam_unix(sshd:session): session closed for user core
Apr 14 00:02:57.927499 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:56018.service: Deactivated successfully.
Apr 14 00:02:57.930495 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 00:02:57.969579 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Apr 14 00:02:57.973974 systemd-logind[1450]: Removed session 16.
Apr 14 00:03:03.004817 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:56030.service - OpenSSH per-connection server daemon (10.0.0.1:56030).
Apr 14 00:03:03.097853 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 56030 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:03.101839 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:03.174349 systemd-logind[1450]: New session 17 of user core.
Apr 14 00:03:03.179573 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 00:03:03.782629 sshd[4145]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:03.804921 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:56030.service: Deactivated successfully.
Apr 14 00:03:03.817749 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 00:03:03.834589 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Apr 14 00:03:03.837595 systemd-logind[1450]: Removed session 17.
Apr 14 00:03:08.782988 kubelet[2593]: E0414 00:03:08.782905 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:08.833338 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350).
Apr 14 00:03:08.926562 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:08.929651 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:08.988099 systemd-logind[1450]: New session 18 of user core.
Apr 14 00:03:09.000709 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 00:03:09.473792 sshd[4161]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:09.482446 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:51350.service: Deactivated successfully.
Apr 14 00:03:09.486797 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 00:03:09.499937 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Apr 14 00:03:09.502709 systemd-logind[1450]: Removed session 18.
Apr 14 00:03:14.523983 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:51360.service - OpenSSH per-connection server daemon (10.0.0.1:51360).
Apr 14 00:03:14.633129 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:14.635641 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:14.687522 systemd-logind[1450]: New session 19 of user core.
Apr 14 00:03:14.699349 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 00:03:15.092177 sshd[4180]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:15.100221 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:51360.service: Deactivated successfully.
Apr 14 00:03:15.110114 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 00:03:15.155201 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Apr 14 00:03:15.161603 systemd-logind[1450]: Removed session 19.
Apr 14 00:03:16.783922 kubelet[2593]: E0414 00:03:16.779252 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:20.191770 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:44806.service - OpenSSH per-connection server daemon (10.0.0.1:44806).
Apr 14 00:03:20.384950 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 44806 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:20.397691 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:20.449114 systemd-logind[1450]: New session 20 of user core.
Apr 14 00:03:20.469861 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 00:03:21.096323 sshd[4195]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:21.108730 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:44806.service: Deactivated successfully.
Apr 14 00:03:21.113524 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 00:03:21.117550 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Apr 14 00:03:21.119758 systemd-logind[1450]: Removed session 20.
Apr 14 00:03:24.804042 kubelet[2593]: E0414 00:03:24.803986 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:26.127017 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:37424.service - OpenSSH per-connection server daemon (10.0.0.1:37424).
Apr 14 00:03:26.236071 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 37424 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:26.236879 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:26.284326 systemd-logind[1450]: New session 21 of user core.
Apr 14 00:03:26.292791 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 00:03:26.671703 sshd[4210]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:26.690426 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:37424.service: Deactivated successfully.
Apr 14 00:03:26.692470 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 00:03:26.695835 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Apr 14 00:03:26.701750 systemd-logind[1450]: Removed session 21.
Apr 14 00:03:27.785544 kubelet[2593]: E0414 00:03:27.784710 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:31.726253 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:37440.service - OpenSSH per-connection server daemon (10.0.0.1:37440).
Apr 14 00:03:31.818253 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 37440 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:31.817724 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:31.829672 systemd-logind[1450]: New session 22 of user core.
Apr 14 00:03:31.835630 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 00:03:32.287133 sshd[4226]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:32.295331 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Apr 14 00:03:32.295910 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:37440.service: Deactivated successfully.
Apr 14 00:03:32.302053 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 00:03:32.308796 systemd-logind[1450]: Removed session 22.
Apr 14 00:03:35.786707 kubelet[2593]: E0414 00:03:35.786083 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:36.790934 kubelet[2593]: E0414 00:03:36.790886 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:37.326984 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:34560.service - OpenSSH per-connection server daemon (10.0.0.1:34560).
Apr 14 00:03:37.401847 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 34560 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:37.405124 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:37.422790 systemd-logind[1450]: New session 23 of user core.
Apr 14 00:03:37.437063 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 00:03:37.967583 sshd[4241]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:37.977059 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:34560.service: Deactivated successfully.
Apr 14 00:03:37.982761 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 00:03:37.990000 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Apr 14 00:03:37.992760 systemd-logind[1450]: Removed session 23.
Apr 14 00:03:42.971366 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:34576.service - OpenSSH per-connection server daemon (10.0.0.1:34576).
Apr 14 00:03:43.133200 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 34576 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:43.132616 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:43.148916 systemd-logind[1450]: New session 24 of user core.
Apr 14 00:03:43.165377 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 00:03:43.614914 sshd[4258]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:43.623820 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:34576.service: Deactivated successfully.
Apr 14 00:03:43.628043 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 00:03:43.636422 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Apr 14 00:03:43.638013 systemd-logind[1450]: Removed session 24.
Apr 14 00:03:48.671313 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522).
Apr 14 00:03:48.837449 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:48.839591 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:48.858281 systemd-logind[1450]: New session 25 of user core.
Apr 14 00:03:48.873967 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 00:03:49.400922 sshd[4275]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:49.416973 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:42522.service: Deactivated successfully.
Apr 14 00:03:49.424766 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Apr 14 00:03:49.429950 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 00:03:49.457019 systemd-logind[1450]: Removed session 25.
Apr 14 00:03:54.509773 systemd[1]: Started sshd@25-10.0.0.37:22-10.0.0.1:42528.service - OpenSSH per-connection server daemon (10.0.0.1:42528).
Apr 14 00:03:54.633066 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 42528 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:03:54.637107 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:03:54.671730 systemd-logind[1450]: New session 26 of user core.
Apr 14 00:03:54.681981 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 00:03:55.203910 sshd[4291]: pam_unix(sshd:session): session closed for user core
Apr 14 00:03:55.221932 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit.
Apr 14 00:03:55.225538 systemd[1]: sshd@25-10.0.0.37:22-10.0.0.1:42528.service: Deactivated successfully.
Apr 14 00:03:55.229472 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 00:03:55.234090 systemd-logind[1450]: Removed session 26.
Apr 14 00:03:59.783576 kubelet[2593]: E0414 00:03:59.782688 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:00.314047 systemd[1]: Started sshd@26-10.0.0.37:22-10.0.0.1:38628.service - OpenSSH per-connection server daemon (10.0.0.1:38628).
Apr 14 00:04:00.477600 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 38628 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:00.481766 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:00.515000 systemd-logind[1450]: New session 27 of user core.
Apr 14 00:04:00.541228 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 00:04:01.204055 sshd[4307]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:01.229030 systemd[1]: sshd@26-10.0.0.37:22-10.0.0.1:38628.service: Deactivated successfully.
Apr 14 00:04:01.236203 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 00:04:01.289536 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit.
Apr 14 00:04:01.298053 systemd-logind[1450]: Removed session 27.
Apr 14 00:04:05.802774 kubelet[2593]: E0414 00:04:05.794743 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:06.289136 systemd[1]: Started sshd@27-10.0.0.37:22-10.0.0.1:46172.service - OpenSSH per-connection server daemon (10.0.0.1:46172).
Apr 14 00:04:06.430420 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 46172 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:06.456453 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:06.497475 systemd-logind[1450]: New session 28 of user core.
Apr 14 00:04:06.529499 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 00:04:07.094703 sshd[4323]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:07.120114 systemd[1]: sshd@27-10.0.0.37:22-10.0.0.1:46172.service: Deactivated successfully.
Apr 14 00:04:07.137222 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 00:04:07.184687 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit.
Apr 14 00:04:07.188454 systemd-logind[1450]: Removed session 28.
Apr 14 00:04:12.144213 systemd[1]: Started sshd@28-10.0.0.37:22-10.0.0.1:46186.service - OpenSSH per-connection server daemon (10.0.0.1:46186).
Apr 14 00:04:12.253248 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 46186 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:12.231068 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:12.261143 systemd-logind[1450]: New session 29 of user core.
Apr 14 00:04:12.278640 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 00:04:12.764746 sshd[4339]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:12.771854 systemd[1]: sshd@28-10.0.0.37:22-10.0.0.1:46186.service: Deactivated successfully.
Apr 14 00:04:12.776590 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 00:04:12.779135 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit.
Apr 14 00:04:12.784547 systemd-logind[1450]: Removed session 29.
Apr 14 00:04:17.778646 kubelet[2593]: E0414 00:04:17.777025 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:17.813418 systemd[1]: Started sshd@29-10.0.0.37:22-10.0.0.1:41192.service - OpenSSH per-connection server daemon (10.0.0.1:41192).
Apr 14 00:04:17.931765 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 41192 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:17.936041 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:17.966192 systemd-logind[1450]: New session 30 of user core.
Apr 14 00:04:17.981755 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 00:04:18.434081 sshd[4356]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:18.494669 systemd[1]: sshd@29-10.0.0.37:22-10.0.0.1:41192.service: Deactivated successfully.
Apr 14 00:04:18.500104 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 00:04:18.513116 systemd-logind[1450]: Session 30 logged out. Waiting for processes to exit.
Apr 14 00:04:18.517667 systemd-logind[1450]: Removed session 30.
Apr 14 00:04:23.491532 systemd[1]: Started sshd@30-10.0.0.37:22-10.0.0.1:41204.service - OpenSSH per-connection server daemon (10.0.0.1:41204).
Apr 14 00:04:23.626223 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 41204 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:23.632359 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:23.739920 systemd-logind[1450]: New session 31 of user core.
Apr 14 00:04:23.748633 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 00:04:24.299383 sshd[4371]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:24.311225 systemd-logind[1450]: Session 31 logged out. Waiting for processes to exit.
Apr 14 00:04:24.311898 systemd[1]: sshd@30-10.0.0.37:22-10.0.0.1:41204.service: Deactivated successfully.
Apr 14 00:04:24.330140 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 00:04:24.343475 systemd-logind[1450]: Removed session 31.
Apr 14 00:04:29.417650 systemd[1]: Started sshd@31-10.0.0.37:22-10.0.0.1:46976.service - OpenSSH per-connection server daemon (10.0.0.1:46976).
Apr 14 00:04:29.535137 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 46976 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:29.597660 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:29.639563 systemd-logind[1450]: New session 32 of user core.
Apr 14 00:04:29.717033 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 14 00:04:30.277946 sshd[4388]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:30.305684 systemd[1]: sshd@31-10.0.0.37:22-10.0.0.1:46976.service: Deactivated successfully.
Apr 14 00:04:30.310451 systemd[1]: session-32.scope: Deactivated successfully.
Apr 14 00:04:30.313142 systemd-logind[1450]: Session 32 logged out. Waiting for processes to exit.
Apr 14 00:04:30.316499 systemd-logind[1450]: Removed session 32.
Apr 14 00:04:33.788355 kubelet[2593]: E0414 00:04:33.787977 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:35.340981 systemd[1]: Started sshd@32-10.0.0.37:22-10.0.0.1:39624.service - OpenSSH per-connection server daemon (10.0.0.1:39624).
Apr 14 00:04:35.506014 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 39624 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:35.506028 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:35.539093 systemd-logind[1450]: New session 33 of user core.
Apr 14 00:04:35.571071 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 14 00:04:35.766672 kubelet[2593]: E0414 00:04:35.765914 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:36.380928 sshd[4403]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:36.392509 systemd[1]: sshd@32-10.0.0.37:22-10.0.0.1:39624.service: Deactivated successfully.
Apr 14 00:04:36.396624 systemd[1]: session-33.scope: Deactivated successfully.
Apr 14 00:04:36.398464 systemd-logind[1450]: Session 33 logged out. Waiting for processes to exit.
Apr 14 00:04:36.404273 systemd-logind[1450]: Removed session 33.
Apr 14 00:04:41.393069 systemd[1]: Started sshd@33-10.0.0.37:22-10.0.0.1:39634.service - OpenSSH per-connection server daemon (10.0.0.1:39634).
Apr 14 00:04:41.536674 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 39634 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:41.537132 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:41.550527 systemd-logind[1450]: New session 34 of user core.
Apr 14 00:04:41.562590 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 14 00:04:42.061496 sshd[4420]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:42.067500 systemd[1]: sshd@33-10.0.0.37:22-10.0.0.1:39634.service: Deactivated successfully.
Apr 14 00:04:42.071867 systemd[1]: session-34.scope: Deactivated successfully.
Apr 14 00:04:42.075075 systemd-logind[1450]: Session 34 logged out. Waiting for processes to exit.
Apr 14 00:04:42.082203 systemd-logind[1450]: Removed session 34.
Apr 14 00:04:47.082714 systemd[1]: Started sshd@34-10.0.0.37:22-10.0.0.1:43498.service - OpenSSH per-connection server daemon (10.0.0.1:43498).
Apr 14 00:04:47.209078 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 43498 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:47.212597 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:47.245760 systemd-logind[1450]: New session 35 of user core.
Apr 14 00:04:47.259998 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 14 00:04:47.763746 kubelet[2593]: E0414 00:04:47.763660 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:47.767272 sshd[4437]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:47.774762 systemd[1]: sshd@34-10.0.0.37:22-10.0.0.1:43498.service: Deactivated successfully.
Apr 14 00:04:47.778908 systemd[1]: session-35.scope: Deactivated successfully.
Apr 14 00:04:47.781353 systemd-logind[1450]: Session 35 logged out. Waiting for processes to exit.
Apr 14 00:04:47.783995 systemd-logind[1450]: Removed session 35.
Apr 14 00:04:49.796776 kubelet[2593]: E0414 00:04:49.794090 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:04:52.852403 systemd[1]: Started sshd@35-10.0.0.37:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500).
Apr 14 00:04:52.959662 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:52.959878 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:52.976420 systemd-logind[1450]: New session 36 of user core.
Apr 14 00:04:53.014094 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 14 00:04:53.376592 sshd[4454]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:53.388580 systemd[1]: sshd@35-10.0.0.37:22-10.0.0.1:43500.service: Deactivated successfully.
Apr 14 00:04:53.395058 systemd[1]: session-36.scope: Deactivated successfully.
Apr 14 00:04:53.425045 systemd-logind[1450]: Session 36 logged out. Waiting for processes to exit.
Apr 14 00:04:53.434803 systemd-logind[1450]: Removed session 36.
Apr 14 00:04:58.442863 systemd[1]: Started sshd@36-10.0.0.37:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890).
Apr 14 00:04:58.567800 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:04:58.567446 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:04:58.578780 systemd-logind[1450]: New session 37 of user core.
Apr 14 00:04:58.612887 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 14 00:04:59.133812 sshd[4469]: pam_unix(sshd:session): session closed for user core
Apr 14 00:04:59.154663 systemd[1]: sshd@36-10.0.0.37:22-10.0.0.1:33890.service: Deactivated successfully.
Apr 14 00:04:59.159088 systemd[1]: session-37.scope: Deactivated successfully.
Apr 14 00:04:59.166520 systemd-logind[1450]: Session 37 logged out. Waiting for processes to exit.
Apr 14 00:04:59.170222 systemd-logind[1450]: Removed session 37.
Apr 14 00:05:01.760634 kubelet[2593]: E0414 00:05:01.760190 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:03.776498 kubelet[2593]: E0414 00:05:03.775796 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:04.136438 systemd[1]: Started sshd@37-10.0.0.37:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904).
Apr 14 00:05:04.295940 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:04.304017 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:04.324993 systemd-logind[1450]: New session 38 of user core.
Apr 14 00:05:04.345074 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 14 00:05:04.741796 sshd[4484]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:04.786979 systemd[1]: sshd@37-10.0.0.37:22-10.0.0.1:33904.service: Deactivated successfully.
Apr 14 00:05:04.797017 systemd[1]: session-38.scope: Deactivated successfully.
Apr 14 00:05:04.805855 systemd-logind[1450]: Session 38 logged out. Waiting for processes to exit.
Apr 14 00:05:04.811048 systemd-logind[1450]: Removed session 38.
Apr 14 00:05:09.786082 systemd[1]: Started sshd@38-10.0.0.37:22-10.0.0.1:40182.service - OpenSSH per-connection server daemon (10.0.0.1:40182).
Apr 14 00:05:09.931849 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 40182 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:09.935735 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:09.974843 systemd-logind[1450]: New session 39 of user core.
Apr 14 00:05:10.009935 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 14 00:05:10.412837 sshd[4499]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:10.420096 systemd[1]: sshd@38-10.0.0.37:22-10.0.0.1:40182.service: Deactivated successfully.
Apr 14 00:05:10.423934 systemd[1]: session-39.scope: Deactivated successfully.
Apr 14 00:05:10.429843 systemd-logind[1450]: Session 39 logged out. Waiting for processes to exit.
Apr 14 00:05:10.433125 systemd-logind[1450]: Removed session 39.
Apr 14 00:05:15.500042 systemd[1]: Started sshd@39-10.0.0.37:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888).
Apr 14 00:05:15.556000 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:15.559739 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:15.579984 systemd-logind[1450]: New session 40 of user core.
Apr 14 00:05:15.589633 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 14 00:05:15.986590 sshd[4516]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:16.001087 systemd[1]: sshd@39-10.0.0.37:22-10.0.0.1:34888.service: Deactivated successfully.
Apr 14 00:05:16.005609 systemd[1]: session-40.scope: Deactivated successfully.
Apr 14 00:05:16.016123 systemd-logind[1450]: Session 40 logged out. Waiting for processes to exit.
Apr 14 00:05:16.019344 systemd-logind[1450]: Removed session 40.
Apr 14 00:05:19.772740 kubelet[2593]: E0414 00:05:19.772444 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:21.015341 systemd[1]: Started sshd@40-10.0.0.37:22-10.0.0.1:34902.service - OpenSSH per-connection server daemon (10.0.0.1:34902).
Apr 14 00:05:21.117061 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 34902 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:21.121082 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:21.131844 systemd-logind[1450]: New session 41 of user core.
Apr 14 00:05:21.139813 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 14 00:05:21.468487 sshd[4531]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:21.475017 systemd[1]: sshd@40-10.0.0.37:22-10.0.0.1:34902.service: Deactivated successfully.
Apr 14 00:05:21.478489 systemd[1]: session-41.scope: Deactivated successfully.
Apr 14 00:05:21.479923 systemd-logind[1450]: Session 41 logged out. Waiting for processes to exit.
Apr 14 00:05:21.483657 systemd-logind[1450]: Removed session 41.
Apr 14 00:05:26.526580 systemd[1]: Started sshd@41-10.0.0.37:22-10.0.0.1:34512.service - OpenSSH per-connection server daemon (10.0.0.1:34512).
Apr 14 00:05:26.638658 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 34512 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:26.641085 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:26.675028 systemd-logind[1450]: New session 42 of user core.
Apr 14 00:05:26.710844 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 00:05:27.239700 sshd[4550]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:27.285296 systemd[1]: sshd@41-10.0.0.37:22-10.0.0.1:34512.service: Deactivated successfully.
Apr 14 00:05:27.291390 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 00:05:27.303719 systemd-logind[1450]: Session 42 logged out. Waiting for processes to exit.
Apr 14 00:05:27.309514 systemd-logind[1450]: Removed session 42.
Apr 14 00:05:32.308945 systemd[1]: Started sshd@42-10.0.0.37:22-10.0.0.1:34526.service - OpenSSH per-connection server daemon (10.0.0.1:34526).
Apr 14 00:05:32.427577 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 34526 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:32.430942 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:32.449920 systemd-logind[1450]: New session 43 of user core.
Apr 14 00:05:32.461769 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 00:05:32.778396 kubelet[2593]: E0414 00:05:32.777674 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:32.931951 sshd[4566]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:32.946824 systemd[1]: sshd@42-10.0.0.37:22-10.0.0.1:34526.service: Deactivated successfully.
Apr 14 00:05:32.951140 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 00:05:32.958686 systemd-logind[1450]: Session 43 logged out. Waiting for processes to exit.
Apr 14 00:05:32.963251 systemd-logind[1450]: Removed session 43.
Apr 14 00:05:38.053264 systemd[1]: Started sshd@43-10.0.0.37:22-10.0.0.1:34912.service - OpenSSH per-connection server daemon (10.0.0.1:34912).
Apr 14 00:05:38.224571 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 34912 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:38.234667 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:38.331761 systemd-logind[1450]: New session 44 of user core.
Apr 14 00:05:38.365077 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 00:05:38.908663 sshd[4582]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:38.919192 systemd[1]: sshd@43-10.0.0.37:22-10.0.0.1:34912.service: Deactivated successfully.
Apr 14 00:05:38.925309 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 00:05:38.929733 systemd-logind[1450]: Session 44 logged out. Waiting for processes to exit.
Apr 14 00:05:38.934457 systemd-logind[1450]: Removed session 44.
Apr 14 00:05:44.006948 systemd[1]: Started sshd@44-10.0.0.37:22-10.0.0.1:34924.service - OpenSSH per-connection server daemon (10.0.0.1:34924).
Apr 14 00:05:44.241817 sshd[4600]: Accepted publickey for core from 10.0.0.1 port 34924 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:44.256255 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:44.276942 systemd-logind[1450]: New session 45 of user core.
Apr 14 00:05:44.287589 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 00:05:44.718570 sshd[4600]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:44.794727 systemd[1]: sshd@44-10.0.0.37:22-10.0.0.1:34924.service: Deactivated successfully.
Apr 14 00:05:44.799825 kubelet[2593]: E0414 00:05:44.798797 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:44.804012 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 00:05:44.813825 systemd-logind[1450]: Session 45 logged out. Waiting for processes to exit.
Apr 14 00:05:44.833023 systemd[1]: Started sshd@45-10.0.0.37:22-10.0.0.1:34932.service - OpenSSH per-connection server daemon (10.0.0.1:34932).
Apr 14 00:05:44.841326 systemd-logind[1450]: Removed session 45.
Apr 14 00:05:44.943534 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 34932 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:44.944716 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:44.977549 systemd-logind[1450]: New session 46 of user core.
Apr 14 00:05:44.992003 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 00:05:45.806435 sshd[4617]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:45.825114 systemd[1]: sshd@45-10.0.0.37:22-10.0.0.1:34932.service: Deactivated successfully.
Apr 14 00:05:45.834361 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 00:05:45.836405 systemd-logind[1450]: Session 46 logged out. Waiting for processes to exit.
Apr 14 00:05:45.863687 systemd[1]: Started sshd@46-10.0.0.37:22-10.0.0.1:56288.service - OpenSSH per-connection server daemon (10.0.0.1:56288).
Apr 14 00:05:45.880778 systemd-logind[1450]: Removed session 46.
Apr 14 00:05:46.081511 sshd[4630]: Accepted publickey for core from 10.0.0.1 port 56288 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:46.087012 sshd[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:46.102594 systemd-logind[1450]: New session 47 of user core.
Apr 14 00:05:46.119988 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 14 00:05:46.604370 sshd[4630]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:46.616270 systemd-logind[1450]: Session 47 logged out. Waiting for processes to exit.
Apr 14 00:05:46.616611 systemd[1]: sshd@46-10.0.0.37:22-10.0.0.1:56288.service: Deactivated successfully.
Apr 14 00:05:46.621027 systemd[1]: session-47.scope: Deactivated successfully.
Apr 14 00:05:46.628538 systemd-logind[1450]: Removed session 47.
Apr 14 00:05:47.767668 kubelet[2593]: E0414 00:05:47.767024 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:51.660081 systemd[1]: Started sshd@47-10.0.0.37:22-10.0.0.1:56302.service - OpenSSH per-connection server daemon (10.0.0.1:56302).
Apr 14 00:05:51.741635 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 56302 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:51.746197 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:51.803391 systemd-logind[1450]: New session 48 of user core.
Apr 14 00:05:51.818991 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 14 00:05:52.397849 sshd[4645]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:52.408562 systemd[1]: sshd@47-10.0.0.37:22-10.0.0.1:56302.service: Deactivated successfully.
Apr 14 00:05:52.425258 systemd[1]: session-48.scope: Deactivated successfully.
Apr 14 00:05:52.427634 systemd-logind[1450]: Session 48 logged out. Waiting for processes to exit.
Apr 14 00:05:52.429428 systemd-logind[1450]: Removed session 48.
Apr 14 00:05:57.496591 systemd[1]: Started sshd@48-10.0.0.37:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478).
Apr 14 00:05:57.629766 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:05:57.636836 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:05:57.689702 systemd-logind[1450]: New session 49 of user core.
Apr 14 00:05:57.704346 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 14 00:05:57.796435 kubelet[2593]: E0414 00:05:57.795680 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:05:58.090107 sshd[4659]: pam_unix(sshd:session): session closed for user core
Apr 14 00:05:58.105125 systemd[1]: sshd@48-10.0.0.37:22-10.0.0.1:34478.service: Deactivated successfully.
Apr 14 00:05:58.144386 systemd[1]: session-49.scope: Deactivated successfully.
Apr 14 00:05:58.147915 systemd-logind[1450]: Session 49 logged out. Waiting for processes to exit.
Apr 14 00:05:58.155575 systemd-logind[1450]: Removed session 49.
Apr 14 00:06:03.166992 systemd[1]: Started sshd@49-10.0.0.37:22-10.0.0.1:34484.service - OpenSSH per-connection server daemon (10.0.0.1:34484).
Apr 14 00:06:03.337401 sshd[4673]: Accepted publickey for core from 10.0.0.1 port 34484 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:03.340324 sshd[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:03.358883 systemd-logind[1450]: New session 50 of user core.
Apr 14 00:06:03.375989 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 14 00:06:03.883345 sshd[4673]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:03.888875 systemd[1]: sshd@49-10.0.0.37:22-10.0.0.1:34484.service: Deactivated successfully.
Apr 14 00:06:03.896368 systemd[1]: session-50.scope: Deactivated successfully.
Apr 14 00:06:03.905132 systemd-logind[1450]: Session 50 logged out. Waiting for processes to exit.
Apr 14 00:06:03.908056 systemd-logind[1450]: Removed session 50.
Apr 14 00:06:07.760616 kubelet[2593]: E0414 00:06:07.759635 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:07.761699 kubelet[2593]: E0414 00:06:07.761080 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:08.919978 systemd[1]: Started sshd@50-10.0.0.37:22-10.0.0.1:45254.service - OpenSSH per-connection server daemon (10.0.0.1:45254).
Apr 14 00:06:09.010312 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 45254 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:09.014632 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:09.030049 systemd-logind[1450]: New session 51 of user core.
Apr 14 00:06:09.043981 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 14 00:06:09.372268 sshd[4687]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:09.378812 systemd[1]: sshd@50-10.0.0.37:22-10.0.0.1:45254.service: Deactivated successfully.
Apr 14 00:06:09.386063 systemd[1]: session-51.scope: Deactivated successfully.
Apr 14 00:06:09.415914 systemd-logind[1450]: Session 51 logged out. Waiting for processes to exit.
Apr 14 00:06:09.420062 systemd-logind[1450]: Removed session 51.
Apr 14 00:06:11.767867 kubelet[2593]: E0414 00:06:11.766783 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:14.450441 systemd[1]: Started sshd@51-10.0.0.37:22-10.0.0.1:45268.service - OpenSSH per-connection server daemon (10.0.0.1:45268).
Apr 14 00:06:14.541603 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 45268 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:14.597559 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:14.621280 systemd-logind[1450]: New session 52 of user core.
Apr 14 00:06:14.636388 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 14 00:06:15.037865 sshd[4703]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:15.099239 systemd[1]: sshd@51-10.0.0.37:22-10.0.0.1:45268.service: Deactivated successfully.
Apr 14 00:06:15.104047 systemd[1]: session-52.scope: Deactivated successfully.
Apr 14 00:06:15.107210 systemd-logind[1450]: Session 52 logged out. Waiting for processes to exit.
Apr 14 00:06:15.110265 systemd-logind[1450]: Removed session 52.
Apr 14 00:06:20.058561 systemd[1]: Started sshd@52-10.0.0.37:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416).
Apr 14 00:06:20.236362 sshd[4717]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:20.300824 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:20.321237 systemd-logind[1450]: New session 53 of user core.
Apr 14 00:06:20.329732 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 14 00:06:20.701586 sshd[4717]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:20.713040 systemd[1]: sshd@52-10.0.0.37:22-10.0.0.1:45416.service: Deactivated successfully.
Apr 14 00:06:20.718385 systemd[1]: session-53.scope: Deactivated successfully.
Apr 14 00:06:20.727526 systemd-logind[1450]: Session 53 logged out. Waiting for processes to exit.
Apr 14 00:06:20.732017 systemd-logind[1450]: Removed session 53.
Apr 14 00:06:25.786343 systemd[1]: Started sshd@53-10.0.0.37:22-10.0.0.1:57258.service - OpenSSH per-connection server daemon (10.0.0.1:57258).
Apr 14 00:06:25.848201 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 57258 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:25.851437 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:25.885236 systemd-logind[1450]: New session 54 of user core.
Apr 14 00:06:25.898181 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 14 00:06:26.176480 sshd[4731]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:26.215119 systemd[1]: sshd@53-10.0.0.37:22-10.0.0.1:57258.service: Deactivated successfully.
Apr 14 00:06:26.220077 systemd[1]: session-54.scope: Deactivated successfully.
Apr 14 00:06:26.223481 systemd-logind[1450]: Session 54 logged out. Waiting for processes to exit.
Apr 14 00:06:26.226644 systemd-logind[1450]: Removed session 54.
Apr 14 00:06:31.306745 systemd[1]: Started sshd@54-10.0.0.37:22-10.0.0.1:57272.service - OpenSSH per-connection server daemon (10.0.0.1:57272).
Apr 14 00:06:31.359274 sshd[4746]: Accepted publickey for core from 10.0.0.1 port 57272 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:31.362754 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:31.388233 systemd-logind[1450]: New session 55 of user core.
Apr 14 00:06:31.419465 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 14 00:06:31.826855 sshd[4746]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:31.833419 systemd[1]: sshd@54-10.0.0.37:22-10.0.0.1:57272.service: Deactivated successfully.
Apr 14 00:06:31.836465 systemd[1]: session-55.scope: Deactivated successfully.
Apr 14 00:06:31.839873 systemd-logind[1450]: Session 55 logged out. Waiting for processes to exit.
Apr 14 00:06:31.844708 systemd-logind[1450]: Removed session 55.
Apr 14 00:06:36.891356 systemd[1]: Started sshd@55-10.0.0.37:22-10.0.0.1:42602.service - OpenSSH per-connection server daemon (10.0.0.1:42602).
Apr 14 00:06:37.026097 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 42602 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:37.031108 sshd[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:37.074373 systemd-logind[1450]: New session 56 of user core.
Apr 14 00:06:37.087270 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 14 00:06:37.528836 sshd[4762]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:37.541580 systemd[1]: sshd@55-10.0.0.37:22-10.0.0.1:42602.service: Deactivated successfully.
Apr 14 00:06:37.554760 systemd[1]: session-56.scope: Deactivated successfully.
Apr 14 00:06:37.558823 systemd-logind[1450]: Session 56 logged out. Waiting for processes to exit.
Apr 14 00:06:37.565462 systemd-logind[1450]: Removed session 56.
Apr 14 00:06:38.773563 kubelet[2593]: E0414 00:06:38.772654 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:42.573506 systemd[1]: Started sshd@56-10.0.0.37:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610).
Apr 14 00:06:42.696885 sshd[4778]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:42.699881 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:42.717738 systemd-logind[1450]: New session 57 of user core.
Apr 14 00:06:42.726489 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 14 00:06:43.202779 sshd[4778]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:43.212554 systemd[1]: sshd@56-10.0.0.37:22-10.0.0.1:42610.service: Deactivated successfully.
Apr 14 00:06:43.223836 systemd[1]: session-57.scope: Deactivated successfully.
Apr 14 00:06:43.226744 systemd-logind[1450]: Session 57 logged out. Waiting for processes to exit.
Apr 14 00:06:43.230213 systemd-logind[1450]: Removed session 57.
Apr 14 00:06:48.265182 systemd[1]: Started sshd@57-10.0.0.37:22-10.0.0.1:36200.service - OpenSSH per-connection server daemon (10.0.0.1:36200).
Apr 14 00:06:48.319983 sshd[4795]: Accepted publickey for core from 10.0.0.1 port 36200 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:48.344119 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:48.379737 systemd-logind[1450]: New session 58 of user core.
Apr 14 00:06:48.416932 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 14 00:06:48.854329 sshd[4795]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:48.869078 systemd[1]: sshd@57-10.0.0.37:22-10.0.0.1:36200.service: Deactivated successfully.
Apr 14 00:06:48.891109 systemd[1]: session-58.scope: Deactivated successfully.
Apr 14 00:06:48.897957 systemd-logind[1450]: Session 58 logged out. Waiting for processes to exit.
Apr 14 00:06:48.902645 systemd-logind[1450]: Removed session 58.
Apr 14 00:06:52.789530 kubelet[2593]: E0414 00:06:52.788394 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:53.937201 systemd[1]: Started sshd@58-10.0.0.37:22-10.0.0.1:36210.service - OpenSSH per-connection server daemon (10.0.0.1:36210).
Apr 14 00:06:54.051768 sshd[4809]: Accepted publickey for core from 10.0.0.1 port 36210 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:54.064536 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:54.101066 systemd-logind[1450]: New session 59 of user core.
Apr 14 00:06:54.118952 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 14 00:06:54.483820 sshd[4809]: pam_unix(sshd:session): session closed for user core
Apr 14 00:06:54.490915 systemd[1]: sshd@58-10.0.0.37:22-10.0.0.1:36210.service: Deactivated successfully.
Apr 14 00:06:54.496741 systemd[1]: session-59.scope: Deactivated successfully.
Apr 14 00:06:54.498481 systemd-logind[1450]: Session 59 logged out. Waiting for processes to exit.
Apr 14 00:06:54.509286 systemd-logind[1450]: Removed session 59.
Apr 14 00:06:56.810639 kubelet[2593]: E0414 00:06:56.810590 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:59.554833 systemd[1]: Started sshd@59-10.0.0.37:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312).
Apr 14 00:06:59.647122 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:06:59.651928 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:06:59.704054 systemd-logind[1450]: New session 60 of user core.
Apr 14 00:06:59.709746 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 14 00:07:00.078504 sshd[4824]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:00.084694 systemd[1]: sshd@59-10.0.0.37:22-10.0.0.1:48312.service: Deactivated successfully.
Apr 14 00:07:00.089954 systemd[1]: session-60.scope: Deactivated successfully.
Apr 14 00:07:00.093252 systemd-logind[1450]: Session 60 logged out. Waiting for processes to exit.
Apr 14 00:07:00.096091 systemd-logind[1450]: Removed session 60.
Apr 14 00:07:01.785087 kubelet[2593]: E0414 00:07:01.784936 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:05.109314 systemd[1]: Started sshd@60-10.0.0.37:22-10.0.0.1:48314.service - OpenSSH per-connection server daemon (10.0.0.1:48314).
Apr 14 00:07:05.227020 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:05.230284 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:05.242456 systemd-logind[1450]: New session 61 of user core.
Apr 14 00:07:05.303305 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 14 00:07:05.744588 sshd[4838]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:05.792308 systemd[1]: sshd@60-10.0.0.37:22-10.0.0.1:48314.service: Deactivated successfully.
Apr 14 00:07:05.796206 systemd[1]: session-61.scope: Deactivated successfully.
Apr 14 00:07:05.800071 systemd-logind[1450]: Session 61 logged out. Waiting for processes to exit.
Apr 14 00:07:05.802268 systemd-logind[1450]: Removed session 61.
Apr 14 00:07:08.798774 kubelet[2593]: E0414 00:07:08.798602 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:10.891422 systemd[1]: Started sshd@61-10.0.0.37:22-10.0.0.1:58840.service - OpenSSH per-connection server daemon (10.0.0.1:58840).
Apr 14 00:07:11.004549 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 58840 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:11.008121 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:11.026765 systemd-logind[1450]: New session 62 of user core.
Apr 14 00:07:11.052347 systemd[1]: Started session-62.scope - Session 62 of User core.
Apr 14 00:07:11.514129 sshd[4853]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:11.523043 systemd[1]: sshd@61-10.0.0.37:22-10.0.0.1:58840.service: Deactivated successfully.
Apr 14 00:07:11.527655 systemd[1]: session-62.scope: Deactivated successfully.
Apr 14 00:07:11.531229 systemd-logind[1450]: Session 62 logged out. Waiting for processes to exit.
Apr 14 00:07:11.534770 systemd-logind[1450]: Removed session 62.
Apr 14 00:07:16.605982 systemd[1]: Started sshd@62-10.0.0.37:22-10.0.0.1:46244.service - OpenSSH per-connection server daemon (10.0.0.1:46244).
Apr 14 00:07:16.724129 sshd[4871]: Accepted publickey for core from 10.0.0.1 port 46244 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:16.728690 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:16.772030 systemd-logind[1450]: New session 63 of user core.
Apr 14 00:07:16.780392 systemd[1]: Started session-63.scope - Session 63 of User core.
Apr 14 00:07:17.379252 sshd[4871]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:17.393740 systemd[1]: sshd@62-10.0.0.37:22-10.0.0.1:46244.service: Deactivated successfully.
Apr 14 00:07:17.399038 systemd[1]: session-63.scope: Deactivated successfully.
Apr 14 00:07:17.402005 systemd-logind[1450]: Session 63 logged out. Waiting for processes to exit.
Apr 14 00:07:17.419273 systemd-logind[1450]: Removed session 63.
Apr 14 00:07:17.801414 kubelet[2593]: E0414 00:07:17.800930 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:20.775906 kubelet[2593]: E0414 00:07:20.775239 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:22.423196 systemd[1]: Started sshd@63-10.0.0.37:22-10.0.0.1:46246.service - OpenSSH per-connection server daemon (10.0.0.1:46246).
Apr 14 00:07:22.560441 sshd[4886]: Accepted publickey for core from 10.0.0.1 port 46246 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:22.565456 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:22.606940 systemd-logind[1450]: New session 64 of user core.
Apr 14 00:07:22.656742 systemd[1]: Started session-64.scope - Session 64 of User core.
Apr 14 00:07:23.324891 sshd[4886]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:23.353799 systemd-logind[1450]: Session 64 logged out. Waiting for processes to exit.
Apr 14 00:07:23.357875 systemd[1]: sshd@63-10.0.0.37:22-10.0.0.1:46246.service: Deactivated successfully.
Apr 14 00:07:23.404029 systemd[1]: session-64.scope: Deactivated successfully.
Apr 14 00:07:23.409375 systemd-logind[1450]: Removed session 64.
Apr 14 00:07:28.362430 systemd[1]: Started sshd@64-10.0.0.37:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222).
Apr 14 00:07:28.542766 sshd[4901]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:28.546000 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:28.599034 systemd-logind[1450]: New session 65 of user core.
Apr 14 00:07:28.609405 systemd[1]: Started session-65.scope - Session 65 of User core.
Apr 14 00:07:29.170404 sshd[4901]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:29.196008 systemd-logind[1450]: Session 65 logged out. Waiting for processes to exit.
Apr 14 00:07:29.197074 systemd[1]: sshd@64-10.0.0.37:22-10.0.0.1:55222.service: Deactivated successfully.
Apr 14 00:07:29.221118 systemd[1]: session-65.scope: Deactivated successfully.
Apr 14 00:07:29.227323 systemd-logind[1450]: Removed session 65.
Apr 14 00:07:34.226513 systemd[1]: Started sshd@65-10.0.0.37:22-10.0.0.1:55224.service - OpenSSH per-connection server daemon (10.0.0.1:55224).
Apr 14 00:07:34.433022 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 55224 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:34.437226 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:34.505671 systemd-logind[1450]: New session 66 of user core.
Apr 14 00:07:34.511770 systemd[1]: Started session-66.scope - Session 66 of User core.
Apr 14 00:07:34.781913 kubelet[2593]: E0414 00:07:34.779924 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:34.957031 sshd[4915]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:34.965208 systemd[1]: sshd@65-10.0.0.37:22-10.0.0.1:55224.service: Deactivated successfully.
Apr 14 00:07:34.968561 systemd[1]: session-66.scope: Deactivated successfully.
Apr 14 00:07:34.974864 systemd-logind[1450]: Session 66 logged out. Waiting for processes to exit.
Apr 14 00:07:34.992902 systemd-logind[1450]: Removed session 66.
Apr 14 00:07:40.034750 systemd[1]: Started sshd@66-10.0.0.37:22-10.0.0.1:45954.service - OpenSSH per-connection server daemon (10.0.0.1:45954).
Apr 14 00:07:40.193200 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 45954 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:40.199408 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:40.213938 systemd-logind[1450]: New session 67 of user core.
Apr 14 00:07:40.224298 systemd[1]: Started session-67.scope - Session 67 of User core.
Apr 14 00:07:40.653339 sshd[4931]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:40.661617 systemd[1]: sshd@66-10.0.0.37:22-10.0.0.1:45954.service: Deactivated successfully.
Apr 14 00:07:40.685556 systemd[1]: session-67.scope: Deactivated successfully.
Apr 14 00:07:40.690861 systemd-logind[1450]: Session 67 logged out. Waiting for processes to exit.
Apr 14 00:07:40.696529 systemd-logind[1450]: Removed session 67.
Apr 14 00:07:45.695037 systemd[1]: Started sshd@67-10.0.0.37:22-10.0.0.1:51664.service - OpenSSH per-connection server daemon (10.0.0.1:51664).
Apr 14 00:07:45.826271 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 51664 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:45.832051 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:45.846662 systemd-logind[1450]: New session 68 of user core.
Apr 14 00:07:45.855895 systemd[1]: Started session-68.scope - Session 68 of User core.
Apr 14 00:07:46.297507 sshd[4947]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:46.305476 systemd[1]: sshd@67-10.0.0.37:22-10.0.0.1:51664.service: Deactivated successfully.
Apr 14 00:07:46.311430 systemd[1]: session-68.scope: Deactivated successfully.
Apr 14 00:07:46.312934 systemd-logind[1450]: Session 68 logged out. Waiting for processes to exit.
Apr 14 00:07:46.316294 systemd-logind[1450]: Removed session 68.
Apr 14 00:07:51.360085 systemd[1]: Started sshd@68-10.0.0.37:22-10.0.0.1:51674.service - OpenSSH per-connection server daemon (10.0.0.1:51674).
Apr 14 00:07:51.440074 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 51674 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:51.445018 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:51.520617 systemd-logind[1450]: New session 69 of user core.
Apr 14 00:07:51.538636 systemd[1]: Started session-69.scope - Session 69 of User core.
Apr 14 00:07:51.871849 sshd[4962]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:51.878362 systemd[1]: sshd@68-10.0.0.37:22-10.0.0.1:51674.service: Deactivated successfully.
Apr 14 00:07:51.883069 systemd[1]: session-69.scope: Deactivated successfully.
Apr 14 00:07:51.889543 systemd-logind[1450]: Session 69 logged out. Waiting for processes to exit.
Apr 14 00:07:51.891037 systemd-logind[1450]: Removed session 69.
Apr 14 00:07:56.970859 systemd[1]: Started sshd@69-10.0.0.37:22-10.0.0.1:33974.service - OpenSSH per-connection server daemon (10.0.0.1:33974).
Apr 14 00:07:57.047944 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 33974 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:07:57.050127 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:07:57.069729 systemd-logind[1450]: New session 70 of user core.
Apr 14 00:07:57.107027 systemd[1]: Started session-70.scope - Session 70 of User core.
Apr 14 00:07:57.375415 sshd[4978]: pam_unix(sshd:session): session closed for user core
Apr 14 00:07:57.389379 systemd[1]: sshd@69-10.0.0.37:22-10.0.0.1:33974.service: Deactivated successfully.
Apr 14 00:07:57.394588 systemd[1]: session-70.scope: Deactivated successfully.
Apr 14 00:07:57.398465 systemd-logind[1450]: Session 70 logged out. Waiting for processes to exit.
Apr 14 00:07:57.404080 systemd-logind[1450]: Removed session 70.
Apr 14 00:07:57.778442 kubelet[2593]: E0414 00:07:57.778081 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:02.431071 systemd[1]: Started sshd@70-10.0.0.37:22-10.0.0.1:33976.service - OpenSSH per-connection server daemon (10.0.0.1:33976).
Apr 14 00:08:02.507451 sshd[4992]: Accepted publickey for core from 10.0.0.1 port 33976 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:02.510432 sshd[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:02.521431 systemd-logind[1450]: New session 71 of user core.
Apr 14 00:08:02.528535 systemd[1]: Started session-71.scope - Session 71 of User core.
Apr 14 00:08:02.759869 sshd[4992]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:02.768529 systemd[1]: sshd@70-10.0.0.37:22-10.0.0.1:33976.service: Deactivated successfully.
Apr 14 00:08:02.772838 systemd[1]: session-71.scope: Deactivated successfully.
Apr 14 00:08:02.774823 systemd-logind[1450]: Session 71 logged out. Waiting for processes to exit.
Apr 14 00:08:02.793699 systemd-logind[1450]: Removed session 71.
Apr 14 00:08:04.757859 kubelet[2593]: E0414 00:08:04.757806 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:07.833603 systemd[1]: Started sshd@71-10.0.0.37:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618).
Apr 14 00:08:07.918354 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:07.930144 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:07.945247 systemd-logind[1450]: New session 72 of user core.
Apr 14 00:08:07.997916 systemd[1]: Started session-72.scope - Session 72 of User core.
Apr 14 00:08:08.297033 sshd[5008]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:08.312655 systemd[1]: sshd@71-10.0.0.37:22-10.0.0.1:54618.service: Deactivated successfully.
Apr 14 00:08:08.315266 systemd[1]: session-72.scope: Deactivated successfully.
Apr 14 00:08:08.317177 systemd-logind[1450]: Session 72 logged out. Waiting for processes to exit.
Apr 14 00:08:08.320822 systemd-logind[1450]: Removed session 72.
Apr 14 00:08:11.764356 kubelet[2593]: E0414 00:08:11.764025 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:13.319346 systemd[1]: Started sshd@72-10.0.0.37:22-10.0.0.1:54624.service - OpenSSH per-connection server daemon (10.0.0.1:54624).
Apr 14 00:08:13.361307 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 54624 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:13.363735 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:13.377139 systemd-logind[1450]: New session 73 of user core.
Apr 14 00:08:13.397851 systemd[1]: Started session-73.scope - Session 73 of User core.
Apr 14 00:08:13.695549 sshd[5024]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:13.701060 systemd[1]: sshd@72-10.0.0.37:22-10.0.0.1:54624.service: Deactivated successfully.
Apr 14 00:08:13.705082 systemd[1]: session-73.scope: Deactivated successfully.
Apr 14 00:08:13.708126 systemd-logind[1450]: Session 73 logged out. Waiting for processes to exit.
Apr 14 00:08:13.710327 systemd-logind[1450]: Removed session 73.
Apr 14 00:08:18.796859 kubelet[2593]: E0414 00:08:18.796768 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:18.800901 systemd[1]: Started sshd@73-10.0.0.37:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408).
Apr 14 00:08:18.857097 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:18.859345 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:18.870459 systemd-logind[1450]: New session 74 of user core.
Apr 14 00:08:18.882959 systemd[1]: Started session-74.scope - Session 74 of User core.
Apr 14 00:08:19.309595 sshd[5041]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:19.326742 systemd[1]: sshd@73-10.0.0.37:22-10.0.0.1:42408.service: Deactivated successfully.
Apr 14 00:08:19.330819 systemd[1]: session-74.scope: Deactivated successfully.
Apr 14 00:08:19.398959 systemd-logind[1450]: Session 74 logged out. Waiting for processes to exit.
Apr 14 00:08:19.404921 systemd-logind[1450]: Removed session 74.
Apr 14 00:08:22.786271 kubelet[2593]: E0414 00:08:22.784978 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:24.332739 systemd[1]: Started sshd@74-10.0.0.37:22-10.0.0.1:42410.service - OpenSSH per-connection server daemon (10.0.0.1:42410).
Apr 14 00:08:24.408509 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 42410 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:24.412111 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:24.441938 systemd-logind[1450]: New session 75 of user core.
Apr 14 00:08:24.518449 systemd[1]: Started session-75.scope - Session 75 of User core.
Apr 14 00:08:24.776506 sshd[5057]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:24.785341 systemd[1]: sshd@74-10.0.0.37:22-10.0.0.1:42410.service: Deactivated successfully.
Apr 14 00:08:24.788639 systemd[1]: session-75.scope: Deactivated successfully.
Apr 14 00:08:24.790410 systemd-logind[1450]: Session 75 logged out. Waiting for processes to exit.
Apr 14 00:08:24.792337 systemd-logind[1450]: Removed session 75.
Apr 14 00:08:29.804949 systemd[1]: Started sshd@75-10.0.0.37:22-10.0.0.1:60282.service - OpenSSH per-connection server daemon (10.0.0.1:60282).
Apr 14 00:08:29.849362 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 60282 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:29.852013 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:29.891563 systemd-logind[1450]: New session 76 of user core.
Apr 14 00:08:29.905840 systemd[1]: Started session-76.scope - Session 76 of User core.
Apr 14 00:08:30.098091 sshd[5073]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:30.103353 systemd[1]: sshd@75-10.0.0.37:22-10.0.0.1:60282.service: Deactivated successfully.
Apr 14 00:08:30.106526 systemd[1]: session-76.scope: Deactivated successfully.
Apr 14 00:08:30.107675 systemd-logind[1450]: Session 76 logged out. Waiting for processes to exit.
Apr 14 00:08:30.109625 systemd-logind[1450]: Removed session 76.
Apr 14 00:08:35.115639 systemd[1]: Started sshd@76-10.0.0.37:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290).
Apr 14 00:08:35.207584 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:35.211059 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:35.224747 systemd-logind[1450]: New session 77 of user core.
Apr 14 00:08:35.234792 systemd[1]: Started session-77.scope - Session 77 of User core.
Apr 14 00:08:35.494504 sshd[5087]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:35.502371 systemd[1]: sshd@76-10.0.0.37:22-10.0.0.1:60290.service: Deactivated successfully.
Apr 14 00:08:35.506437 systemd[1]: session-77.scope: Deactivated successfully.
Apr 14 00:08:35.508682 systemd-logind[1450]: Session 77 logged out. Waiting for processes to exit.
Apr 14 00:08:35.510503 systemd-logind[1450]: Removed session 77.
Apr 14 00:08:37.758908 kubelet[2593]: E0414 00:08:37.758556 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:40.516101 systemd[1]: Started sshd@77-10.0.0.37:22-10.0.0.1:60026.service - OpenSSH per-connection server daemon (10.0.0.1:60026).
Apr 14 00:08:40.602417 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 60026 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:40.604615 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:40.615429 systemd-logind[1450]: New session 78 of user core.
Apr 14 00:08:40.625008 systemd[1]: Started session-78.scope - Session 78 of User core.
Apr 14 00:08:40.817553 sshd[5104]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:40.822106 systemd[1]: sshd@77-10.0.0.37:22-10.0.0.1:60026.service: Deactivated successfully.
Apr 14 00:08:40.825010 systemd[1]: session-78.scope: Deactivated successfully.
Apr 14 00:08:40.826566 systemd-logind[1450]: Session 78 logged out. Waiting for processes to exit.
Apr 14 00:08:40.829205 systemd-logind[1450]: Removed session 78.
Apr 14 00:08:42.758803 kubelet[2593]: E0414 00:08:42.758724 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:45.832312 systemd[1]: Started sshd@78-10.0.0.37:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192).
Apr 14 00:08:45.926227 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:45.945359 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:45.970863 systemd-logind[1450]: New session 79 of user core.
Apr 14 00:08:45.977524 systemd[1]: Started session-79.scope - Session 79 of User core.
Apr 14 00:08:46.183422 sshd[5120]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:46.188858 systemd[1]: sshd@78-10.0.0.37:22-10.0.0.1:58192.service: Deactivated successfully.
Apr 14 00:08:46.191511 systemd[1]: session-79.scope: Deactivated successfully.
Apr 14 00:08:46.193004 systemd-logind[1450]: Session 79 logged out. Waiting for processes to exit.
Apr 14 00:08:46.194543 systemd-logind[1450]: Removed session 79.
Apr 14 00:08:46.763325 kubelet[2593]: E0414 00:08:46.763200 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:08:51.209929 systemd[1]: Started sshd@79-10.0.0.37:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198).
Apr 14 00:08:51.246264 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:51.248558 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:51.265899 systemd-logind[1450]: New session 80 of user core.
Apr 14 00:08:51.283104 systemd[1]: Started session-80.scope - Session 80 of User core.
Apr 14 00:08:51.578815 sshd[5135]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:51.585515 systemd[1]: sshd@79-10.0.0.37:22-10.0.0.1:58198.service: Deactivated successfully.
Apr 14 00:08:51.589366 systemd[1]: session-80.scope: Deactivated successfully.
Apr 14 00:08:51.591467 systemd-logind[1450]: Session 80 logged out. Waiting for processes to exit.
Apr 14 00:08:51.593274 systemd-logind[1450]: Removed session 80.
Apr 14 00:08:56.640621 systemd[1]: Started sshd@80-10.0.0.37:22-10.0.0.1:44792.service - OpenSSH per-connection server daemon (10.0.0.1:44792).
Apr 14 00:08:56.727211 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 44792 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:08:56.730689 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:08:56.805767 systemd-logind[1450]: New session 81 of user core.
Apr 14 00:08:56.824284 systemd[1]: Started session-81.scope - Session 81 of User core.
Apr 14 00:08:57.139143 sshd[5149]: pam_unix(sshd:session): session closed for user core
Apr 14 00:08:57.190475 systemd[1]: sshd@80-10.0.0.37:22-10.0.0.1:44792.service: Deactivated successfully.
Apr 14 00:08:57.193611 systemd[1]: session-81.scope: Deactivated successfully.
Apr 14 00:08:57.194890 systemd-logind[1450]: Session 81 logged out. Waiting for processes to exit.
Apr 14 00:08:57.196583 systemd-logind[1450]: Removed session 81.
Apr 14 00:09:02.166313 systemd[1]: Started sshd@81-10.0.0.37:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794).
Apr 14 00:09:02.286534 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:02.289418 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:02.300231 systemd-logind[1450]: New session 82 of user core.
Apr 14 00:09:02.312421 systemd[1]: Started session-82.scope - Session 82 of User core.
Apr 14 00:09:02.687681 sshd[5164]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:02.721012 systemd[1]: sshd@81-10.0.0.37:22-10.0.0.1:44794.service: Deactivated successfully.
Apr 14 00:09:02.725986 systemd[1]: session-82.scope: Deactivated successfully.
Apr 14 00:09:02.729704 systemd-logind[1450]: Session 82 logged out. Waiting for processes to exit.
Apr 14 00:09:02.785072 systemd[1]: Started sshd@82-10.0.0.37:22-10.0.0.1:44798.service - OpenSSH per-connection server daemon (10.0.0.1:44798).
Apr 14 00:09:02.789785 systemd-logind[1450]: Removed session 82.
Apr 14 00:09:02.835767 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 44798 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:02.837900 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:02.851427 systemd-logind[1450]: New session 83 of user core.
Apr 14 00:09:02.867739 systemd[1]: Started session-83.scope - Session 83 of User core.
Apr 14 00:09:03.998094 sshd[5178]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:04.019398 systemd[1]: sshd@82-10.0.0.37:22-10.0.0.1:44798.service: Deactivated successfully.
Apr 14 00:09:04.026133 systemd[1]: session-83.scope: Deactivated successfully.
Apr 14 00:09:04.028349 systemd-logind[1450]: Session 83 logged out. Waiting for processes to exit.
Apr 14 00:09:04.041380 systemd[1]: Started sshd@83-10.0.0.37:22-10.0.0.1:44808.service - OpenSSH per-connection server daemon (10.0.0.1:44808).
Apr 14 00:09:04.049719 systemd-logind[1450]: Removed session 83.
Apr 14 00:09:04.179264 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 44808 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:04.188716 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:04.217639 systemd-logind[1450]: New session 84 of user core.
Apr 14 00:09:04.230827 systemd[1]: Started session-84.scope - Session 84 of User core.
Apr 14 00:09:05.948556 sshd[5191]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:05.963280 systemd[1]: sshd@83-10.0.0.37:22-10.0.0.1:44808.service: Deactivated successfully.
Apr 14 00:09:05.966952 systemd[1]: session-84.scope: Deactivated successfully.
Apr 14 00:09:05.969661 systemd-logind[1450]: Session 84 logged out. Waiting for processes to exit.
Apr 14 00:09:05.980388 systemd[1]: Started sshd@84-10.0.0.37:22-10.0.0.1:47414.service - OpenSSH per-connection server daemon (10.0.0.1:47414).
Apr 14 00:09:05.983276 systemd-logind[1450]: Removed session 84.
Apr 14 00:09:06.095445 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 47414 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:06.099454 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:06.115273 systemd-logind[1450]: New session 85 of user core.
Apr 14 00:09:06.132070 systemd[1]: Started session-85.scope - Session 85 of User core.
Apr 14 00:09:07.202091 sshd[5211]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:07.216044 systemd[1]: sshd@84-10.0.0.37:22-10.0.0.1:47414.service: Deactivated successfully.
Apr 14 00:09:07.230435 systemd[1]: session-85.scope: Deactivated successfully.
Apr 14 00:09:07.280474 systemd-logind[1450]: Session 85 logged out. Waiting for processes to exit.
Apr 14 00:09:07.296104 systemd[1]: Started sshd@85-10.0.0.37:22-10.0.0.1:47422.service - OpenSSH per-connection server daemon (10.0.0.1:47422).
Apr 14 00:09:07.300095 systemd-logind[1450]: Removed session 85.
Apr 14 00:09:07.343244 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 47422 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:07.399577 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:07.410335 systemd-logind[1450]: New session 86 of user core.
Apr 14 00:09:07.470794 systemd[1]: Started session-86.scope - Session 86 of User core.
Apr 14 00:09:07.816888 sshd[5223]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:07.827539 systemd[1]: sshd@85-10.0.0.37:22-10.0.0.1:47422.service: Deactivated successfully.
Apr 14 00:09:07.836573 systemd[1]: session-86.scope: Deactivated successfully.
Apr 14 00:09:07.839778 systemd-logind[1450]: Session 86 logged out. Waiting for processes to exit.
Apr 14 00:09:07.843372 systemd-logind[1450]: Removed session 86.
Apr 14 00:09:11.760598 kubelet[2593]: E0414 00:09:11.759980 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:12.939984 systemd[1]: Started sshd@86-10.0.0.37:22-10.0.0.1:47430.service - OpenSSH per-connection server daemon (10.0.0.1:47430).
Apr 14 00:09:13.037752 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 47430 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:13.045022 sshd[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:13.070281 systemd-logind[1450]: New session 87 of user core.
Apr 14 00:09:13.087533 systemd[1]: Started session-87.scope - Session 87 of User core.
Apr 14 00:09:13.507571 sshd[5237]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:13.525835 systemd[1]: sshd@86-10.0.0.37:22-10.0.0.1:47430.service: Deactivated successfully.
Apr 14 00:09:13.537516 systemd[1]: session-87.scope: Deactivated successfully.
Apr 14 00:09:13.605089 systemd-logind[1450]: Session 87 logged out. Waiting for processes to exit.
Apr 14 00:09:13.607887 systemd-logind[1450]: Removed session 87.
Apr 14 00:09:18.587392 systemd[1]: Started sshd@87-10.0.0.37:22-10.0.0.1:52336.service - OpenSSH per-connection server daemon (10.0.0.1:52336).
Apr 14 00:09:18.701727 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 52336 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:18.701325 sshd[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:18.731693 systemd-logind[1450]: New session 88 of user core.
Apr 14 00:09:18.748550 systemd[1]: Started session-88.scope - Session 88 of User core.
Apr 14 00:09:19.241080 sshd[5254]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:19.289350 systemd[1]: sshd@87-10.0.0.37:22-10.0.0.1:52336.service: Deactivated successfully.
Apr 14 00:09:19.297665 systemd[1]: session-88.scope: Deactivated successfully.
Apr 14 00:09:19.304023 systemd-logind[1450]: Session 88 logged out. Waiting for processes to exit.
Apr 14 00:09:19.307941 systemd-logind[1450]: Removed session 88.
Apr 14 00:09:19.772221 kubelet[2593]: E0414 00:09:19.767765 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:24.331889 systemd[1]: Started sshd@88-10.0.0.37:22-10.0.0.1:52340.service - OpenSSH per-connection server daemon (10.0.0.1:52340).
Apr 14 00:09:24.458004 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 52340 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:24.461381 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:24.509652 systemd-logind[1450]: New session 89 of user core.
Apr 14 00:09:24.525358 systemd[1]: Started session-89.scope - Session 89 of User core.
Apr 14 00:09:25.021845 sshd[5268]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:25.045561 systemd[1]: sshd@88-10.0.0.37:22-10.0.0.1:52340.service: Deactivated successfully.
Apr 14 00:09:25.066530 systemd[1]: session-89.scope: Deactivated successfully.
Apr 14 00:09:25.070677 systemd-logind[1450]: Session 89 logged out. Waiting for processes to exit.
Apr 14 00:09:25.080384 systemd-logind[1450]: Removed session 89.
Apr 14 00:09:30.031350 systemd[1]: Started sshd@89-10.0.0.37:22-10.0.0.1:47112.service - OpenSSH per-connection server daemon (10.0.0.1:47112).
Apr 14 00:09:30.095915 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 47112 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:30.103102 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:30.146355 systemd-logind[1450]: New session 90 of user core.
Apr 14 00:09:30.231331 systemd[1]: Started session-90.scope - Session 90 of User core.
Apr 14 00:09:30.548044 sshd[5283]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:30.555087 systemd[1]: sshd@89-10.0.0.37:22-10.0.0.1:47112.service: Deactivated successfully.
Apr 14 00:09:30.562517 systemd[1]: session-90.scope: Deactivated successfully.
Apr 14 00:09:30.567707 systemd-logind[1450]: Session 90 logged out. Waiting for processes to exit.
Apr 14 00:09:30.572619 systemd-logind[1450]: Removed session 90.
Apr 14 00:09:34.790671 kubelet[2593]: E0414 00:09:34.790597 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:35.566604 systemd[1]: Started sshd@90-10.0.0.37:22-10.0.0.1:49990.service - OpenSSH per-connection server daemon (10.0.0.1:49990).
Apr 14 00:09:35.640891 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 49990 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:35.643345 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:35.674706 systemd-logind[1450]: New session 91 of user core.
Apr 14 00:09:35.690059 systemd[1]: Started session-91.scope - Session 91 of User core.
Apr 14 00:09:35.924254 sshd[5298]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:35.932395 systemd[1]: sshd@90-10.0.0.37:22-10.0.0.1:49990.service: Deactivated successfully.
Apr 14 00:09:35.935519 systemd[1]: session-91.scope: Deactivated successfully.
Apr 14 00:09:35.938019 systemd-logind[1450]: Session 91 logged out. Waiting for processes to exit.
Apr 14 00:09:35.942206 systemd-logind[1450]: Removed session 91.
Apr 14 00:09:40.937613 systemd[1]: Started sshd@91-10.0.0.37:22-10.0.0.1:50004.service - OpenSSH per-connection server daemon (10.0.0.1:50004).
Apr 14 00:09:40.987667 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 50004 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:40.990482 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:40.996286 systemd-logind[1450]: New session 92 of user core.
Apr 14 00:09:41.007703 systemd[1]: Started session-92.scope - Session 92 of User core.
Apr 14 00:09:41.221123 sshd[5314]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:41.226757 systemd[1]: sshd@91-10.0.0.37:22-10.0.0.1:50004.service: Deactivated successfully.
Apr 14 00:09:41.229425 systemd[1]: session-92.scope: Deactivated successfully.
Apr 14 00:09:41.230495 systemd-logind[1450]: Session 92 logged out. Waiting for processes to exit.
Apr 14 00:09:41.231837 systemd-logind[1450]: Removed session 92.
Apr 14 00:09:41.799268 kubelet[2593]: E0414 00:09:41.798744 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:46.255892 systemd[1]: Started sshd@92-10.0.0.37:22-10.0.0.1:33708.service - OpenSSH per-connection server daemon (10.0.0.1:33708).
Apr 14 00:09:46.297127 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 33708 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:46.300061 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:46.308959 systemd-logind[1450]: New session 93 of user core.
Apr 14 00:09:46.320860 systemd[1]: Started session-93.scope - Session 93 of User core.
Apr 14 00:09:46.557629 sshd[5330]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:46.571304 systemd[1]: sshd@92-10.0.0.37:22-10.0.0.1:33708.service: Deactivated successfully.
Apr 14 00:09:46.584350 systemd[1]: session-93.scope: Deactivated successfully.
Apr 14 00:09:46.589943 systemd-logind[1450]: Session 93 logged out. Waiting for processes to exit.
Apr 14 00:09:46.595252 systemd-logind[1450]: Removed session 93.
Apr 14 00:09:51.573568 systemd[1]: Started sshd@93-10.0.0.37:22-10.0.0.1:33710.service - OpenSSH per-connection server daemon (10.0.0.1:33710).
Apr 14 00:09:51.642409 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 33710 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:51.706804 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:51.717697 systemd-logind[1450]: New session 94 of user core.
Apr 14 00:09:51.729531 systemd[1]: Started session-94.scope - Session 94 of User core.
Apr 14 00:09:51.759414 kubelet[2593]: E0414 00:09:51.759363 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:51.974978 sshd[5344]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:51.982297 systemd[1]: sshd@93-10.0.0.37:22-10.0.0.1:33710.service: Deactivated successfully.
Apr 14 00:09:51.985053 systemd[1]: session-94.scope: Deactivated successfully.
Apr 14 00:09:51.986867 systemd-logind[1450]: Session 94 logged out. Waiting for processes to exit.
Apr 14 00:09:51.988432 systemd-logind[1450]: Removed session 94.
Apr 14 00:09:52.769980 kubelet[2593]: E0414 00:09:52.769858 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:09:57.000001 systemd[1]: Started sshd@94-10.0.0.37:22-10.0.0.1:44334.service - OpenSSH per-connection server daemon (10.0.0.1:44334).
Apr 14 00:09:57.098001 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 44334 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:09:57.099817 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:09:57.104852 systemd-logind[1450]: New session 95 of user core.
Apr 14 00:09:57.115526 systemd[1]: Started session-95.scope - Session 95 of User core.
Apr 14 00:09:57.243951 sshd[5358]: pam_unix(sshd:session): session closed for user core
Apr 14 00:09:57.247538 systemd[1]: sshd@94-10.0.0.37:22-10.0.0.1:44334.service: Deactivated successfully.
Apr 14 00:09:57.249966 systemd[1]: session-95.scope: Deactivated successfully.
Apr 14 00:09:57.250962 systemd-logind[1450]: Session 95 logged out. Waiting for processes to exit.
Apr 14 00:09:57.251924 systemd-logind[1450]: Removed session 95.
Apr 14 00:10:02.266329 systemd[1]: Started sshd@95-10.0.0.37:22-10.0.0.1:44338.service - OpenSSH per-connection server daemon (10.0.0.1:44338).
Apr 14 00:10:02.400784 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 44338 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:02.401698 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:02.421033 systemd-logind[1450]: New session 96 of user core.
Apr 14 00:10:02.433314 systemd[1]: Started session-96.scope - Session 96 of User core.
Apr 14 00:10:02.691349 sshd[5372]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:02.698590 systemd[1]: sshd@95-10.0.0.37:22-10.0.0.1:44338.service: Deactivated successfully.
Apr 14 00:10:02.702625 systemd[1]: session-96.scope: Deactivated successfully.
Apr 14 00:10:02.705662 systemd-logind[1450]: Session 96 logged out. Waiting for processes to exit.
Apr 14 00:10:02.709371 systemd-logind[1450]: Removed session 96.
Apr 14 00:10:05.789986 kubelet[2593]: E0414 00:10:05.789703 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:10:05.789986 kubelet[2593]: E0414 00:10:05.789874 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:10:07.710575 systemd[1]: Started sshd@96-10.0.0.37:22-10.0.0.1:52088.service - OpenSSH per-connection server daemon (10.0.0.1:52088).
Apr 14 00:10:07.804674 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 52088 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:07.808292 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:07.827227 systemd-logind[1450]: New session 97 of user core.
Apr 14 00:10:07.834733 systemd[1]: Started session-97.scope - Session 97 of User core.
Apr 14 00:10:08.099572 sshd[5386]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:08.108786 systemd[1]: sshd@96-10.0.0.37:22-10.0.0.1:52088.service: Deactivated successfully.
Apr 14 00:10:08.113506 systemd[1]: session-97.scope: Deactivated successfully.
Apr 14 00:10:08.122504 systemd-logind[1450]: Session 97 logged out. Waiting for processes to exit.
Apr 14 00:10:08.125497 systemd-logind[1450]: Removed session 97.
Apr 14 00:10:13.117442 systemd[1]: Started sshd@97-10.0.0.37:22-10.0.0.1:52090.service - OpenSSH per-connection server daemon (10.0.0.1:52090).
Apr 14 00:10:13.165007 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 52090 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:13.167028 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:13.175010 systemd-logind[1450]: New session 98 of user core.
Apr 14 00:10:13.189437 systemd[1]: Started session-98.scope - Session 98 of User core.
Apr 14 00:10:13.401726 sshd[5400]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:13.406871 systemd[1]: sshd@97-10.0.0.37:22-10.0.0.1:52090.service: Deactivated successfully.
Apr 14 00:10:13.411481 systemd[1]: session-98.scope: Deactivated successfully.
Apr 14 00:10:13.414370 systemd-logind[1450]: Session 98 logged out. Waiting for processes to exit.
Apr 14 00:10:13.423448 systemd-logind[1450]: Removed session 98.
Apr 14 00:10:18.429664 systemd[1]: Started sshd@98-10.0.0.37:22-10.0.0.1:51768.service - OpenSSH per-connection server daemon (10.0.0.1:51768).
Apr 14 00:10:18.490411 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 51768 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:18.493689 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:18.503766 systemd-logind[1450]: New session 99 of user core.
Apr 14 00:10:18.515582 systemd[1]: Started session-99.scope - Session 99 of User core.
Apr 14 00:10:18.742458 sshd[5416]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:18.796321 systemd[1]: sshd@98-10.0.0.37:22-10.0.0.1:51768.service: Deactivated successfully.
Apr 14 00:10:18.798092 systemd[1]: session-99.scope: Deactivated successfully.
Apr 14 00:10:18.800434 systemd-logind[1450]: Session 99 logged out. Waiting for processes to exit.
Apr 14 00:10:18.804414 systemd-logind[1450]: Removed session 99.
Apr 14 00:10:23.782848 systemd[1]: Started sshd@99-10.0.0.37:22-10.0.0.1:51782.service - OpenSSH per-connection server daemon (10.0.0.1:51782).
Apr 14 00:10:23.880763 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 51782 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:23.884442 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:23.894796 systemd-logind[1450]: New session 100 of user core.
Apr 14 00:10:23.906842 systemd[1]: Started session-100.scope - Session 100 of User core.
Apr 14 00:10:24.232091 sshd[5430]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:24.235888 systemd[1]: sshd@99-10.0.0.37:22-10.0.0.1:51782.service: Deactivated successfully.
Apr 14 00:10:24.239303 systemd[1]: session-100.scope: Deactivated successfully.
Apr 14 00:10:24.240365 systemd-logind[1450]: Session 100 logged out. Waiting for processes to exit.
Apr 14 00:10:24.242200 systemd-logind[1450]: Removed session 100.
Apr 14 00:10:29.291490 systemd[1]: Started sshd@100-10.0.0.37:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462).
Apr 14 00:10:29.384695 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:29.385523 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:29.403257 systemd-logind[1450]: New session 101 of user core.
Apr 14 00:10:29.413602 systemd[1]: Started session-101.scope - Session 101 of User core.
Apr 14 00:10:29.674405 sshd[5446]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:29.679818 systemd[1]: sshd@100-10.0.0.37:22-10.0.0.1:51462.service: Deactivated successfully.
Apr 14 00:10:29.683742 systemd[1]: session-101.scope: Deactivated successfully.
Apr 14 00:10:29.688919 systemd-logind[1450]: Session 101 logged out. Waiting for processes to exit.
Apr 14 00:10:29.690945 systemd-logind[1450]: Removed session 101.
Apr 14 00:10:31.760019 kubelet[2593]: E0414 00:10:31.759717 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:10:34.707782 systemd[1]: Started sshd@101-10.0.0.37:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466).
Apr 14 00:10:34.741140 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:34.743459 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:34.750076 systemd-logind[1450]: New session 102 of user core.
Apr 14 00:10:34.763674 systemd[1]: Started session-102.scope - Session 102 of User core.
Apr 14 00:10:34.942008 sshd[5461]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:34.945530 systemd[1]: sshd@101-10.0.0.37:22-10.0.0.1:51466.service: Deactivated successfully.
Apr 14 00:10:34.947820 systemd[1]: session-102.scope: Deactivated successfully.
Apr 14 00:10:34.950272 systemd-logind[1450]: Session 102 logged out. Waiting for processes to exit.
Apr 14 00:10:34.951312 systemd-logind[1450]: Removed session 102.
Apr 14 00:10:36.808092 kubelet[2593]: E0414 00:10:36.807442 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:10:38.794503 kubelet[2593]: E0414 00:10:38.794144 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:10:39.975969 systemd[1]: Started sshd@102-10.0.0.37:22-10.0.0.1:51692.service - OpenSSH per-connection server daemon (10.0.0.1:51692).
Apr 14 00:10:40.031542 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:40.033068 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:40.045612 systemd-logind[1450]: New session 103 of user core.
Apr 14 00:10:40.095779 systemd[1]: Started session-103.scope - Session 103 of User core.
Apr 14 00:10:40.339413 sshd[5477]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:40.356460 systemd[1]: sshd@102-10.0.0.37:22-10.0.0.1:51692.service: Deactivated successfully.
Apr 14 00:10:40.364829 systemd[1]: session-103.scope: Deactivated successfully.
Apr 14 00:10:40.374756 systemd-logind[1450]: Session 103 logged out. Waiting for processes to exit.
Apr 14 00:10:40.382577 systemd-logind[1450]: Removed session 103.
Apr 14 00:10:45.380433 systemd[1]: Started sshd@103-10.0.0.37:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066).
Apr 14 00:10:45.441459 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:45.444763 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:45.511142 systemd-logind[1450]: New session 104 of user core.
Apr 14 00:10:45.524677 systemd[1]: Started session-104.scope - Session 104 of User core.
Apr 14 00:10:45.789190 sshd[5493]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:45.795501 systemd[1]: sshd@103-10.0.0.37:22-10.0.0.1:36066.service: Deactivated successfully.
Apr 14 00:10:45.805091 systemd[1]: session-104.scope: Deactivated successfully.
Apr 14 00:10:45.812322 systemd-logind[1450]: Session 104 logged out. Waiting for processes to exit.
Apr 14 00:10:45.820944 systemd-logind[1450]: Removed session 104.
Apr 14 00:10:50.804480 systemd[1]: Started sshd@104-10.0.0.37:22-10.0.0.1:36070.service - OpenSSH per-connection server daemon (10.0.0.1:36070).
Apr 14 00:10:50.846485 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 36070 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:50.847825 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:50.855537 systemd-logind[1450]: New session 105 of user core.
Apr 14 00:10:50.869895 systemd[1]: Started session-105.scope - Session 105 of User core.
Apr 14 00:10:51.079683 sshd[5507]: pam_unix(sshd:session): session closed for user core
Apr 14 00:10:51.091445 systemd[1]: sshd@104-10.0.0.37:22-10.0.0.1:36070.service: Deactivated successfully.
Apr 14 00:10:51.093015 systemd[1]: session-105.scope: Deactivated successfully.
Apr 14 00:10:51.094957 systemd-logind[1450]: Session 105 logged out. Waiting for processes to exit.
Apr 14 00:10:51.104696 systemd[1]: Started sshd@105-10.0.0.37:22-10.0.0.1:36084.service - OpenSSH per-connection server daemon (10.0.0.1:36084).
Apr 14 00:10:51.106055 systemd-logind[1450]: Removed session 105.
Apr 14 00:10:51.199882 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 36084 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:10:51.201882 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:10:51.215471 systemd-logind[1450]: New session 106 of user core.
Apr 14 00:10:51.225771 systemd[1]: Started session-106.scope - Session 106 of User core.
Apr 14 00:10:53.578116 kubelet[2593]: I0414 00:10:53.578012 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mzp22" podStartSLOduration=791.577956069 podStartE2EDuration="13m11.577956069s" podCreationTimestamp="2026-04-13 23:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:58:34.692629351 +0000 UTC m=+56.344177836" watchObservedRunningTime="2026-04-14 00:10:53.577956069 +0000 UTC m=+795.229504555"
Apr 14 00:10:53.603683 containerd[1465]: time="2026-04-14T00:10:53.603495672Z" level=info msg="StopContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" with timeout 30 (s)"
Apr 14 00:10:53.604953 containerd[1465]: time="2026-04-14T00:10:53.604654884Z" level=info msg="Stop container \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" with signal terminated"
Apr 14 00:10:53.644258 systemd[1]: cri-containerd-1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2.scope: Deactivated successfully.
Apr 14 00:10:53.644559 systemd[1]: cri-containerd-1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2.scope: Consumed 3.557s CPU time.
Apr 14 00:10:53.671600 containerd[1465]: time="2026-04-14T00:10:53.670581131Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 00:10:53.706787 containerd[1465]: time="2026-04-14T00:10:53.706626008Z" level=info msg="StopContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" with timeout 2 (s)"
Apr 14 00:10:53.707403 containerd[1465]: time="2026-04-14T00:10:53.707356057Z" level=info msg="Stop container \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" with signal terminated"
Apr 14 00:10:53.708658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2-rootfs.mount: Deactivated successfully.
Apr 14 00:10:53.720146 containerd[1465]: time="2026-04-14T00:10:53.719777265Z" level=info msg="shim disconnected" id=1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2 namespace=k8s.io
Apr 14 00:10:53.721340 containerd[1465]: time="2026-04-14T00:10:53.720086804Z" level=warning msg="cleaning up after shim disconnected" id=1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2 namespace=k8s.io
Apr 14 00:10:53.721340 containerd[1465]: time="2026-04-14T00:10:53.721333702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:10:53.725981 systemd-networkd[1390]: lxc_health: Link DOWN
Apr 14 00:10:53.725993 systemd-networkd[1390]: lxc_health: Lost carrier
Apr 14 00:10:53.782852 containerd[1465]: time="2026-04-14T00:10:53.782516774Z" level=info msg="StopContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" returns successfully"
Apr 14 00:10:53.790840 containerd[1465]: time="2026-04-14T00:10:53.790577164Z" level=info msg="StopPodSandbox for \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\""
Apr 14 00:10:53.791755 containerd[1465]: time="2026-04-14T00:10:53.790969261Z" level=info msg="Container to stop \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.791275 systemd[1]: cri-containerd-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352.scope: Deactivated successfully.
Apr 14 00:10:53.791848 systemd[1]: cri-containerd-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352.scope: Consumed 33.968s CPU time.
Apr 14 00:10:53.796706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e-shm.mount: Deactivated successfully.
Apr 14 00:10:53.805097 systemd[1]: cri-containerd-6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e.scope: Deactivated successfully.
Apr 14 00:10:53.830721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352-rootfs.mount: Deactivated successfully.
Apr 14 00:10:53.839133 containerd[1465]: time="2026-04-14T00:10:53.839044241Z" level=info msg="shim disconnected" id=684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352 namespace=k8s.io
Apr 14 00:10:53.839519 containerd[1465]: time="2026-04-14T00:10:53.839380887Z" level=warning msg="cleaning up after shim disconnected" id=684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352 namespace=k8s.io
Apr 14 00:10:53.839519 containerd[1465]: time="2026-04-14T00:10:53.839393163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:10:53.844791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e-rootfs.mount: Deactivated successfully.
Apr 14 00:10:53.853077 containerd[1465]: time="2026-04-14T00:10:53.852841171Z" level=info msg="shim disconnected" id=6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e namespace=k8s.io
Apr 14 00:10:53.853077 containerd[1465]: time="2026-04-14T00:10:53.852908763Z" level=warning msg="cleaning up after shim disconnected" id=6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e namespace=k8s.io
Apr 14 00:10:53.853077 containerd[1465]: time="2026-04-14T00:10:53.852916938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:10:53.868422 containerd[1465]: time="2026-04-14T00:10:53.867680793Z" level=info msg="StopContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" returns successfully"
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.871837009Z" level=info msg="StopPodSandbox for \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\""
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.871999832Z" level=info msg="Container to stop \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.872019349Z" level=info msg="Container to stop \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.872031181Z" level=info msg="Container to stop \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.872043546Z" level=info msg="Container to stop \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.871986 containerd[1465]: time="2026-04-14T00:10:53.872060740Z" level=info msg="Container to stop \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:10:53.888418 containerd[1465]: time="2026-04-14T00:10:53.887240783Z" level=warning msg="cleanup warnings time=\"2026-04-14T00:10:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 00:10:53.888628 systemd[1]: cri-containerd-ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa.scope: Deactivated successfully.
Apr 14 00:10:53.899005 containerd[1465]: time="2026-04-14T00:10:53.897928290Z" level=info msg="TearDown network for sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" successfully"
Apr 14 00:10:53.899005 containerd[1465]: time="2026-04-14T00:10:53.898029949Z" level=info msg="StopPodSandbox for \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" returns successfully"
Apr 14 00:10:53.930847 containerd[1465]: time="2026-04-14T00:10:53.930723773Z" level=info msg="shim disconnected" id=ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa namespace=k8s.io
Apr 14 00:10:53.930847 containerd[1465]: time="2026-04-14T00:10:53.930823413Z" level=warning msg="cleaning up after shim disconnected" id=ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa namespace=k8s.io
Apr 14 00:10:53.930847 containerd[1465]: time="2026-04-14T00:10:53.930830325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:10:53.934712 kubelet[2593]: I0414 00:10:53.930885 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq96q\" (UniqueName: \"kubernetes.io/projected/444eeaee-fc6e-4af9-8b0f-a55b7364c514-kube-api-access-cq96q\") pod \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\" (UID: \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\") "
Apr 14 00:10:53.934712 kubelet[2593]: I0414 00:10:53.930979 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444eeaee-fc6e-4af9-8b0f-a55b7364c514-cilium-config-path\") pod \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\" (UID: \"444eeaee-fc6e-4af9-8b0f-a55b7364c514\") "
Apr 14 00:10:53.940692 kubelet[2593]: I0414 00:10:53.940601 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444eeaee-fc6e-4af9-8b0f-a55b7364c514-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "444eeaee-fc6e-4af9-8b0f-a55b7364c514" (UID: "444eeaee-fc6e-4af9-8b0f-a55b7364c514"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 14 00:10:53.984521 kubelet[2593]: I0414 00:10:53.984243 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444eeaee-fc6e-4af9-8b0f-a55b7364c514-kube-api-access-cq96q" (OuterVolumeSpecName: "kube-api-access-cq96q") pod "444eeaee-fc6e-4af9-8b0f-a55b7364c514" (UID: "444eeaee-fc6e-4af9-8b0f-a55b7364c514"). InnerVolumeSpecName "kube-api-access-cq96q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 14 00:10:54.006416 containerd[1465]: time="2026-04-14T00:10:54.006328757Z" level=info msg="TearDown network for sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" successfully"
Apr 14 00:10:54.006416 containerd[1465]: time="2026-04-14T00:10:54.006390843Z" level=info msg="StopPodSandbox for \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" returns successfully"
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035233 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-config-path\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035384 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-hostproc\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035410 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-bpf-maps\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035432 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-cgroup\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035452 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-etc-cni-netd\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036033 kubelet[2593]: I0414 00:10:54.035469 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-net\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035499 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-hubble-tls\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035515 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-xtables-lock\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035531 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-run\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035556 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-kernel\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035573 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cni-path\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.036471 kubelet[2593]: I0414 00:10:54.035589 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-lib-modules\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035614 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66fa1bd-1042-482b-9724-7081d0236f97-clustermesh-secrets\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035637 2593 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wp8\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-kube-api-access-55wp8\") pod \"a66fa1bd-1042-482b-9724-7081d0236f97\" (UID: \"a66fa1bd-1042-482b-9724-7081d0236f97\") "
Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035695 2593 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cq96q\" (UniqueName: \"kubernetes.io/projected/444eeaee-fc6e-4af9-8b0f-a55b7364c514-kube-api-access-cq96q\") on node \"localhost\" DevicePath \"\""
Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035710 2593 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444eeaee-fc6e-4af9-8b0f-a55b7364c514-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035756 2593
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 00:10:54.038913 kubelet[2593]: I0414 00:10:54.035777 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039045 kubelet[2593]: I0414 00:10:54.035759 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-hostproc" (OuterVolumeSpecName: "hostproc") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039045 kubelet[2593]: I0414 00:10:54.035802 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039045 kubelet[2593]: I0414 00:10:54.035803 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039045 kubelet[2593]: I0414 00:10:54.035822 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039045 kubelet[2593]: I0414 00:10:54.035823 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039129 kubelet[2593]: I0414 00:10:54.035841 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039129 kubelet[2593]: I0414 00:10:54.035842 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039129 kubelet[2593]: I0414 00:10:54.035857 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.039129 kubelet[2593]: I0414 00:10:54.035873 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cni-path" (OuterVolumeSpecName: "cni-path") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 00:10:54.040138 kubelet[2593]: I0414 00:10:54.040069 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 00:10:54.041678 kubelet[2593]: I0414 00:10:54.041527 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-kube-api-access-55wp8" (OuterVolumeSpecName: "kube-api-access-55wp8") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "kube-api-access-55wp8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 00:10:54.042323 kubelet[2593]: I0414 00:10:54.042268 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66fa1bd-1042-482b-9724-7081d0236f97-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a66fa1bd-1042-482b-9724-7081d0236f97" (UID: "a66fa1bd-1042-482b-9724-7081d0236f97"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.135965 2593 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136034 2593 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136044 2593 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136060 2593 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136098 2593 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136112 2593 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136122 2593 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66fa1bd-1042-482b-9724-7081d0236f97-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.137405 kubelet[2593]: I0414 00:10:54.136132 2593 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55wp8\" (UniqueName: \"kubernetes.io/projected/a66fa1bd-1042-482b-9724-7081d0236f97-kube-api-access-55wp8\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136348 2593 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136424 2593 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136434 2593 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-bpf-maps\") on node \"localhost\" 
DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136442 2593 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136453 2593 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.138450 kubelet[2593]: I0414 00:10:54.136464 2593 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66fa1bd-1042-482b-9724-7081d0236f97-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 14 00:10:54.661188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa-rootfs.mount: Deactivated successfully. Apr 14 00:10:54.662503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa-shm.mount: Deactivated successfully. Apr 14 00:10:54.662850 systemd[1]: var-lib-kubelet-pods-444eeaee\x2dfc6e\x2d4af9\x2d8b0f\x2da55b7364c514-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcq96q.mount: Deactivated successfully. Apr 14 00:10:54.663765 systemd[1]: var-lib-kubelet-pods-a66fa1bd\x2d1042\x2d482b\x2d9724\x2d7081d0236f97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55wp8.mount: Deactivated successfully. Apr 14 00:10:54.665915 systemd[1]: var-lib-kubelet-pods-a66fa1bd\x2d1042\x2d482b\x2d9724\x2d7081d0236f97-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 14 00:10:54.666068 systemd[1]: var-lib-kubelet-pods-a66fa1bd\x2d1042\x2d482b\x2d9724\x2d7081d0236f97-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 14 00:10:54.771869 kubelet[2593]: I0414 00:10:54.771773 2593 scope.go:117] "RemoveContainer" containerID="1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2" Apr 14 00:10:54.782500 systemd[1]: Removed slice kubepods-besteffort-pod444eeaee_fc6e_4af9_8b0f_a55b7364c514.slice - libcontainer container kubepods-besteffort-pod444eeaee_fc6e_4af9_8b0f_a55b7364c514.slice. Apr 14 00:10:54.782764 systemd[1]: kubepods-besteffort-pod444eeaee_fc6e_4af9_8b0f_a55b7364c514.slice: Consumed 3.597s CPU time. Apr 14 00:10:54.790077 containerd[1465]: time="2026-04-14T00:10:54.786877450Z" level=info msg="RemoveContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\"" Apr 14 00:10:54.791192 systemd[1]: Removed slice kubepods-burstable-poda66fa1bd_1042_482b_9724_7081d0236f97.slice - libcontainer container kubepods-burstable-poda66fa1bd_1042_482b_9724_7081d0236f97.slice. Apr 14 00:10:54.791563 systemd[1]: kubepods-burstable-poda66fa1bd_1042_482b_9724_7081d0236f97.slice: Consumed 34.163s CPU time. 
Apr 14 00:10:54.798373 containerd[1465]: time="2026-04-14T00:10:54.797757050Z" level=info msg="RemoveContainer for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" returns successfully" Apr 14 00:10:54.798989 kubelet[2593]: I0414 00:10:54.798776 2593 scope.go:117] "RemoveContainer" containerID="1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2" Apr 14 00:10:54.803363 containerd[1465]: time="2026-04-14T00:10:54.803104488Z" level=error msg="ContainerStatus for \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\": not found" Apr 14 00:10:54.821413 kubelet[2593]: E0414 00:10:54.821136 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\": not found" containerID="1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2" Apr 14 00:10:54.821639 kubelet[2593]: I0414 00:10:54.821456 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2"} err="failed to get container status \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1336c2ac4d3352787901e5841743347526626149009f8dd22909b5bdf14eaca2\": not found" Apr 14 00:10:54.821639 kubelet[2593]: I0414 00:10:54.821556 2593 scope.go:117] "RemoveContainer" containerID="684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352" Apr 14 00:10:54.824487 containerd[1465]: time="2026-04-14T00:10:54.824392326Z" level=info msg="RemoveContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\"" Apr 14 00:10:54.851762 
containerd[1465]: time="2026-04-14T00:10:54.850079428Z" level=info msg="RemoveContainer for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" returns successfully" Apr 14 00:10:54.861947 kubelet[2593]: I0414 00:10:54.861764 2593 scope.go:117] "RemoveContainer" containerID="ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c" Apr 14 00:10:54.876687 containerd[1465]: time="2026-04-14T00:10:54.876538774Z" level=info msg="RemoveContainer for \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\"" Apr 14 00:10:54.896790 containerd[1465]: time="2026-04-14T00:10:54.895596949Z" level=info msg="RemoveContainer for \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\" returns successfully" Apr 14 00:10:54.897325 kubelet[2593]: I0414 00:10:54.897145 2593 scope.go:117] "RemoveContainer" containerID="9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1" Apr 14 00:10:54.901029 containerd[1465]: time="2026-04-14T00:10:54.900870673Z" level=info msg="RemoveContainer for \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\"" Apr 14 00:10:54.908786 containerd[1465]: time="2026-04-14T00:10:54.907425433Z" level=info msg="RemoveContainer for \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\" returns successfully" Apr 14 00:10:54.909548 kubelet[2593]: I0414 00:10:54.909509 2593 scope.go:117] "RemoveContainer" containerID="896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075" Apr 14 00:10:54.915663 containerd[1465]: time="2026-04-14T00:10:54.915485509Z" level=info msg="RemoveContainer for \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\"" Apr 14 00:10:54.941581 containerd[1465]: time="2026-04-14T00:10:54.940814009Z" level=info msg="RemoveContainer for \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\" returns successfully" Apr 14 00:10:54.942711 kubelet[2593]: I0414 00:10:54.942576 2593 scope.go:117] "RemoveContainer" 
containerID="363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c" Apr 14 00:10:54.955714 containerd[1465]: time="2026-04-14T00:10:54.955613540Z" level=info msg="RemoveContainer for \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\"" Apr 14 00:10:54.962445 containerd[1465]: time="2026-04-14T00:10:54.962276744Z" level=info msg="RemoveContainer for \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\" returns successfully" Apr 14 00:10:54.962989 kubelet[2593]: I0414 00:10:54.962953 2593 scope.go:117] "RemoveContainer" containerID="684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352" Apr 14 00:10:54.964850 containerd[1465]: time="2026-04-14T00:10:54.964203513Z" level=error msg="ContainerStatus for \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\": not found" Apr 14 00:10:54.965135 kubelet[2593]: E0414 00:10:54.964445 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\": not found" containerID="684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352" Apr 14 00:10:54.965135 kubelet[2593]: I0414 00:10:54.964492 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352"} err="failed to get container status \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\": rpc error: code = NotFound desc = an error occurred when try to find container \"684617b6acc0929c98bae6f8aae359a28ed95521692b0599b96e29b4831c8352\": not found" Apr 14 00:10:54.965135 kubelet[2593]: I0414 00:10:54.964527 2593 scope.go:117] "RemoveContainer" 
containerID="ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c" Apr 14 00:10:54.965466 containerd[1465]: time="2026-04-14T00:10:54.965232753Z" level=error msg="ContainerStatus for \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\": not found" Apr 14 00:10:54.965547 kubelet[2593]: E0414 00:10:54.965511 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\": not found" containerID="ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c" Apr 14 00:10:54.965600 kubelet[2593]: I0414 00:10:54.965557 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c"} err="failed to get container status \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee19feea869faeac5ef54ee17bacfdcfd9441540e25437e30fd68865b32c081c\": not found" Apr 14 00:10:54.965600 kubelet[2593]: I0414 00:10:54.965583 2593 scope.go:117] "RemoveContainer" containerID="9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1" Apr 14 00:10:54.966478 containerd[1465]: time="2026-04-14T00:10:54.965794783Z" level=error msg="ContainerStatus for \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\": not found" Apr 14 00:10:54.966596 kubelet[2593]: E0414 00:10:54.966442 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\": not found" containerID="9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1" Apr 14 00:10:54.966596 kubelet[2593]: I0414 00:10:54.966474 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1"} err="failed to get container status \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c723a1ca8fa293240a1ca368a8788303c325858342253a215fdd758c0cffdd1\": not found" Apr 14 00:10:54.966596 kubelet[2593]: I0414 00:10:54.966501 2593 scope.go:117] "RemoveContainer" containerID="896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075" Apr 14 00:10:54.966709 containerd[1465]: time="2026-04-14T00:10:54.966673479Z" level=error msg="ContainerStatus for \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\": not found" Apr 14 00:10:54.967707 kubelet[2593]: E0414 00:10:54.966766 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\": not found" containerID="896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075" Apr 14 00:10:54.968194 kubelet[2593]: I0414 00:10:54.968073 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075"} err="failed to get container status \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"896de5deddd5e5be2dc0489311c091299c7498f5924129873a31d846f0682075\": not found" Apr 14 00:10:54.968414 kubelet[2593]: I0414 00:10:54.968231 2593 scope.go:117] "RemoveContainer" containerID="363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c" Apr 14 00:10:54.968704 containerd[1465]: time="2026-04-14T00:10:54.968648682Z" level=error msg="ContainerStatus for \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\": not found" Apr 14 00:10:54.968795 kubelet[2593]: E0414 00:10:54.968773 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\": not found" containerID="363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c" Apr 14 00:10:54.968850 kubelet[2593]: I0414 00:10:54.968802 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c"} err="failed to get container status \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\": rpc error: code = NotFound desc = an error occurred when try to find container \"363aa63c1abe0b16e4ba5fae77569d1eb10d7f62a5931a59e8cfd4b81fc0d15c\": not found" Apr 14 00:10:55.393834 sshd[5523]: pam_unix(sshd:session): session closed for user core Apr 14 00:10:55.404559 systemd[1]: sshd@105-10.0.0.37:22-10.0.0.1:36084.service: Deactivated successfully. Apr 14 00:10:55.407194 systemd[1]: session-106.scope: Deactivated successfully. Apr 14 00:10:55.407397 systemd[1]: session-106.scope: Consumed 1.260s CPU time. Apr 14 00:10:55.410586 systemd-logind[1450]: Session 106 logged out. Waiting for processes to exit. 
Apr 14 00:10:55.425741 systemd[1]: Started sshd@106-10.0.0.37:22-10.0.0.1:42294.service - OpenSSH per-connection server daemon (10.0.0.1:42294). Apr 14 00:10:55.433398 systemd-logind[1450]: Removed session 106. Apr 14 00:10:55.524652 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 42294 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:10:55.529738 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:10:55.537717 systemd-logind[1450]: New session 107 of user core. Apr 14 00:10:55.552235 systemd[1]: Started session-107.scope - Session 107 of User core. Apr 14 00:10:55.597632 kubelet[2593]: E0414 00:10:55.597521 2593 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:10:56.784430 kubelet[2593]: E0414 00:10:56.784303 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mjcvf" podUID="45bb15a9-86dc-4ff1-8bb6-52b73e813f0a" Apr 14 00:10:56.788632 kubelet[2593]: I0414 00:10:56.787355 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444eeaee-fc6e-4af9-8b0f-a55b7364c514" path="/var/lib/kubelet/pods/444eeaee-fc6e-4af9-8b0f-a55b7364c514/volumes" Apr 14 00:10:56.788632 kubelet[2593]: I0414 00:10:56.787702 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66fa1bd-1042-482b-9724-7081d0236f97" path="/var/lib/kubelet/pods/a66fa1bd-1042-482b-9724-7081d0236f97/volumes" Apr 14 00:10:57.145097 kubelet[2593]: I0414 00:10:57.144894 2593 setters.go:618] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-14T00:10:57Z","lastTransitionTime":"2026-04-14T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 14 00:10:57.431444 sshd[5688]: pam_unix(sshd:session): session closed for user core Apr 14 00:10:57.452170 systemd[1]: sshd@106-10.0.0.37:22-10.0.0.1:42294.service: Deactivated successfully. Apr 14 00:10:57.457737 systemd[1]: session-107.scope: Deactivated successfully. Apr 14 00:10:57.458302 systemd[1]: session-107.scope: Consumed 1.206s CPU time. Apr 14 00:10:57.465760 systemd-logind[1450]: Session 107 logged out. Waiting for processes to exit. Apr 14 00:10:57.479715 systemd[1]: Started sshd@107-10.0.0.37:22-10.0.0.1:42306.service - OpenSSH per-connection server daemon (10.0.0.1:42306). Apr 14 00:10:57.493005 systemd-logind[1450]: Removed session 107. Apr 14 00:10:57.601695 systemd[1]: Created slice kubepods-burstable-pod98a54f6a_bc27_41d1_8fa8_94c38a97666c.slice - libcontainer container kubepods-burstable-pod98a54f6a_bc27_41d1_8fa8_94c38a97666c.slice. Apr 14 00:10:57.620135 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 42306 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:10:57.621443 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:10:57.643794 systemd-logind[1450]: New session 108 of user core. Apr 14 00:10:57.655768 systemd[1]: Started session-108.scope - Session 108 of User core. 
Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.730927 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-host-proc-sys-kernel\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.730993 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8k6\" (UniqueName: \"kubernetes.io/projected/98a54f6a-bc27-41d1-8fa8-94c38a97666c-kube-api-access-ld8k6\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.731012 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-cilium-run\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.731028 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-cilium-cgroup\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.731040 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-etc-cni-netd\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.731388 kubelet[2593]: I0414 00:10:57.731051 2593 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-lib-modules\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731332 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-hostproc\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731391 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-cni-path\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731411 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98a54f6a-bc27-41d1-8fa8-94c38a97666c-cilium-ipsec-secrets\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731434 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-host-proc-sys-net\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731533 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/98a54f6a-bc27-41d1-8fa8-94c38a97666c-clustermesh-secrets\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732044 kubelet[2593]: I0414 00:10:57.731582 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98a54f6a-bc27-41d1-8fa8-94c38a97666c-cilium-config-path\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732309 kubelet[2593]: I0414 00:10:57.731607 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-xtables-lock\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732309 kubelet[2593]: I0414 00:10:57.731637 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98a54f6a-bc27-41d1-8fa8-94c38a97666c-bpf-maps\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.732309 kubelet[2593]: I0414 00:10:57.731664 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98a54f6a-bc27-41d1-8fa8-94c38a97666c-hubble-tls\") pod \"cilium-g47p6\" (UID: \"98a54f6a-bc27-41d1-8fa8-94c38a97666c\") " pod="kube-system/cilium-g47p6" Apr 14 00:10:57.737710 sshd[5704]: pam_unix(sshd:session): session closed for user core Apr 14 00:10:57.783026 kubelet[2593]: E0414 00:10:57.782814 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 
14 00:10:57.785514 systemd[1]: sshd@107-10.0.0.37:22-10.0.0.1:42306.service: Deactivated successfully. Apr 14 00:10:57.788521 systemd[1]: session-108.scope: Deactivated successfully. Apr 14 00:10:57.789933 systemd-logind[1450]: Session 108 logged out. Waiting for processes to exit. Apr 14 00:10:57.805895 systemd[1]: Started sshd@108-10.0.0.37:22-10.0.0.1:42316.service - OpenSSH per-connection server daemon (10.0.0.1:42316). Apr 14 00:10:57.814632 systemd-logind[1450]: Removed session 108. Apr 14 00:10:57.867208 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 42316 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:10:57.866395 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:10:57.902439 systemd-logind[1450]: New session 109 of user core. Apr 14 00:10:57.907666 kubelet[2593]: E0414 00:10:57.907579 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:10:57.921652 containerd[1465]: time="2026-04-14T00:10:57.910506331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g47p6,Uid:98a54f6a-bc27-41d1-8fa8-94c38a97666c,Namespace:kube-system,Attempt:0,}" Apr 14 00:10:57.924054 systemd[1]: Started session-109.scope - Session 109 of User core. Apr 14 00:10:57.989767 containerd[1465]: time="2026-04-14T00:10:57.989485992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:10:57.989767 containerd[1465]: time="2026-04-14T00:10:57.989559197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:10:57.989767 containerd[1465]: time="2026-04-14T00:10:57.989568711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:10:57.989767 containerd[1465]: time="2026-04-14T00:10:57.989644246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:10:58.028892 systemd[1]: Started cri-containerd-4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e.scope - libcontainer container 4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e. Apr 14 00:10:58.132554 containerd[1465]: time="2026-04-14T00:10:58.132139370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g47p6,Uid:98a54f6a-bc27-41d1-8fa8-94c38a97666c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\"" Apr 14 00:10:58.137308 kubelet[2593]: E0414 00:10:58.136988 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:10:58.179019 containerd[1465]: time="2026-04-14T00:10:58.178900892Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 14 00:10:58.209679 containerd[1465]: time="2026-04-14T00:10:58.208727562Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc\"" Apr 14 00:10:58.211404 containerd[1465]: time="2026-04-14T00:10:58.210064247Z" level=info msg="StartContainer for \"859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc\"" Apr 14 00:10:58.286096 systemd[1]: Started cri-containerd-859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc.scope - libcontainer container 
859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc. Apr 14 00:10:58.344018 containerd[1465]: time="2026-04-14T00:10:58.343809770Z" level=info msg="StartContainer for \"859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc\" returns successfully" Apr 14 00:10:58.405263 systemd[1]: cri-containerd-859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc.scope: Deactivated successfully. Apr 14 00:10:58.499924 containerd[1465]: time="2026-04-14T00:10:58.499777613Z" level=info msg="shim disconnected" id=859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc namespace=k8s.io Apr 14 00:10:58.499924 containerd[1465]: time="2026-04-14T00:10:58.499866379Z" level=warning msg="cleaning up after shim disconnected" id=859a989c9dcaa9b2ad0ec7918772a40e917975e7910cfedd4dc969ad6c8f45fc namespace=k8s.io Apr 14 00:10:58.499924 containerd[1465]: time="2026-04-14T00:10:58.499878445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:10:58.788300 kubelet[2593]: E0414 00:10:58.787905 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mjcvf" podUID="45bb15a9-86dc-4ff1-8bb6-52b73e813f0a" Apr 14 00:10:58.826351 kubelet[2593]: E0414 00:10:58.826228 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:10:58.846117 containerd[1465]: time="2026-04-14T00:10:58.845953389Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 14 00:10:58.912837 containerd[1465]: time="2026-04-14T00:10:58.912649867Z" level=info msg="CreateContainer within sandbox 
\"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c\"" Apr 14 00:10:58.918703 containerd[1465]: time="2026-04-14T00:10:58.916759193Z" level=info msg="StartContainer for \"1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c\"" Apr 14 00:10:59.021629 systemd[1]: Started cri-containerd-1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c.scope - libcontainer container 1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c. Apr 14 00:10:59.120653 containerd[1465]: time="2026-04-14T00:10:59.120447025Z" level=info msg="StartContainer for \"1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c\" returns successfully" Apr 14 00:10:59.129845 systemd[1]: cri-containerd-1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c.scope: Deactivated successfully. Apr 14 00:10:59.226340 containerd[1465]: time="2026-04-14T00:10:59.226197937Z" level=info msg="shim disconnected" id=1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c namespace=k8s.io Apr 14 00:10:59.226340 containerd[1465]: time="2026-04-14T00:10:59.226290783Z" level=warning msg="cleaning up after shim disconnected" id=1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c namespace=k8s.io Apr 14 00:10:59.226340 containerd[1465]: time="2026-04-14T00:10:59.226301590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:10:59.873487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed4f7a3be726f88894de78406ab4c2f103982800b37b1198353945212ee2a6c-rootfs.mount: Deactivated successfully. 
Apr 14 00:10:59.876442 kubelet[2593]: E0414 00:10:59.875619 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:10:59.891024 containerd[1465]: time="2026-04-14T00:10:59.890882650Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 14 00:10:59.919461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835981958.mount: Deactivated successfully. Apr 14 00:10:59.930132 containerd[1465]: time="2026-04-14T00:10:59.930041334Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b\"" Apr 14 00:10:59.931816 containerd[1465]: time="2026-04-14T00:10:59.931673361Z" level=info msg="StartContainer for \"629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b\"" Apr 14 00:11:00.020921 systemd[1]: Started cri-containerd-629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b.scope - libcontainer container 629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b. Apr 14 00:11:00.107736 containerd[1465]: time="2026-04-14T00:11:00.107666173Z" level=info msg="StartContainer for \"629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b\" returns successfully" Apr 14 00:11:00.113754 systemd[1]: cri-containerd-629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b.scope: Deactivated successfully. 
Apr 14 00:11:00.204714 containerd[1465]: time="2026-04-14T00:11:00.204501825Z" level=info msg="shim disconnected" id=629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b namespace=k8s.io Apr 14 00:11:00.204714 containerd[1465]: time="2026-04-14T00:11:00.204577880Z" level=warning msg="cleaning up after shim disconnected" id=629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b namespace=k8s.io Apr 14 00:11:00.204714 containerd[1465]: time="2026-04-14T00:11:00.204585209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:11:00.604128 kubelet[2593]: E0414 00:11:00.603914 2593 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:11:00.808526 kubelet[2593]: E0414 00:11:00.807731 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mjcvf" podUID="45bb15a9-86dc-4ff1-8bb6-52b73e813f0a" Apr 14 00:11:00.845040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629df5802472b038604fb4258224c18ea74c2e0d5f057b387ee59cab384d527b-rootfs.mount: Deactivated successfully. 
Apr 14 00:11:00.886777 kubelet[2593]: E0414 00:11:00.885739 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:00.905201 containerd[1465]: time="2026-04-14T00:11:00.904829274Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 14 00:11:01.009011 containerd[1465]: time="2026-04-14T00:11:01.007730199Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5\"" Apr 14 00:11:01.010615 containerd[1465]: time="2026-04-14T00:11:01.010572859Z" level=info msg="StartContainer for \"f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5\"" Apr 14 00:11:01.081999 systemd[1]: Started cri-containerd-f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5.scope - libcontainer container f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5. Apr 14 00:11:01.142502 systemd[1]: cri-containerd-f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5.scope: Deactivated successfully. 
Apr 14 00:11:01.148627 containerd[1465]: time="2026-04-14T00:11:01.148379746Z" level=info msg="StartContainer for \"f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5\" returns successfully" Apr 14 00:11:01.238758 containerd[1465]: time="2026-04-14T00:11:01.238652426Z" level=info msg="shim disconnected" id=f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5 namespace=k8s.io Apr 14 00:11:01.238758 containerd[1465]: time="2026-04-14T00:11:01.238732617Z" level=warning msg="cleaning up after shim disconnected" id=f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5 namespace=k8s.io Apr 14 00:11:01.238758 containerd[1465]: time="2026-04-14T00:11:01.238743638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:11:01.845870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6bad68c223cb475bdc543e978b4321e7fe79a26a15fe546cb2760b2952a1ed5-rootfs.mount: Deactivated successfully. Apr 14 00:11:01.904576 kubelet[2593]: E0414 00:11:01.904509 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:01.924716 containerd[1465]: time="2026-04-14T00:11:01.924510605Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 14 00:11:01.985683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155301069.mount: Deactivated successfully. 
Apr 14 00:11:01.991638 containerd[1465]: time="2026-04-14T00:11:01.991571308Z" level=info msg="CreateContainer within sandbox \"4b31a570b00a9b831d6616f7e6976657e169f9820e95200255f3d219b10dcd7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0\"" Apr 14 00:11:01.993586 containerd[1465]: time="2026-04-14T00:11:01.992918139Z" level=info msg="StartContainer for \"b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0\"" Apr 14 00:11:02.116106 systemd[1]: Started cri-containerd-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0.scope - libcontainer container b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0. Apr 14 00:11:02.180442 containerd[1465]: time="2026-04-14T00:11:02.179842653Z" level=info msg="StartContainer for \"b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0\" returns successfully" Apr 14 00:11:02.764985 kubelet[2593]: E0414 00:11:02.764854 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mjcvf" podUID="45bb15a9-86dc-4ff1-8bb6-52b73e813f0a" Apr 14 00:11:02.916401 kubelet[2593]: E0414 00:11:02.916287 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:02.979741 kubelet[2593]: I0414 00:11:02.978711 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g47p6" podStartSLOduration=5.978681466 podStartE2EDuration="5.978681466s" podCreationTimestamp="2026-04-14 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 
00:11:02.978449687 +0000 UTC m=+804.629998167" watchObservedRunningTime="2026-04-14 00:11:02.978681466 +0000 UTC m=+804.630229957" Apr 14 00:11:02.985351 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 14 00:11:03.918825 kubelet[2593]: E0414 00:11:03.918510 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:04.761034 kubelet[2593]: E0414 00:11:04.760913 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mjcvf" podUID="45bb15a9-86dc-4ff1-8bb6-52b73e813f0a" Apr 14 00:11:06.761825 kubelet[2593]: E0414 00:11:06.761494 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:07.761549 kubelet[2593]: E0414 00:11:07.761251 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:08.606654 systemd-networkd[1390]: lxc_health: Link UP Apr 14 00:11:08.611798 systemd-networkd[1390]: lxc_health: Gained carrier Apr 14 00:11:09.911789 kubelet[2593]: E0414 00:11:09.911599 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:10.019604 kubelet[2593]: E0414 00:11:10.019349 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:10.332937 systemd-networkd[1390]: lxc_health: 
Gained IPv6LL Apr 14 00:11:11.034308 kubelet[2593]: E0414 00:11:11.034011 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:15.204737 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 14 00:11:15.229503 systemd-tmpfiles[6718]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 00:11:15.230578 systemd-tmpfiles[6718]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 00:11:15.232101 systemd-tmpfiles[6718]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 00:11:15.232639 systemd-tmpfiles[6718]: ACLs are not supported, ignoring. Apr 14 00:11:15.232708 systemd-tmpfiles[6718]: ACLs are not supported, ignoring. Apr 14 00:11:15.235754 systemd-tmpfiles[6718]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 00:11:15.235775 systemd-tmpfiles[6718]: Skipping /boot Apr 14 00:11:15.243010 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 14 00:11:15.243239 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 14 00:11:17.791444 kubelet[2593]: E0414 00:11:17.791396 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:11:22.204440 systemd[1]: run-containerd-runc-k8s.io-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0-runc.P8PEXW.mount: Deactivated successfully. Apr 14 00:11:24.748681 systemd[1]: run-containerd-runc-k8s.io-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0-runc.G8fU7L.mount: Deactivated successfully. 
Apr 14 00:11:27.345974 systemd[1]: run-containerd-runc-k8s.io-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0-runc.SWhGb0.mount: Deactivated successfully. Apr 14 00:11:39.018368 containerd[1465]: time="2026-04-14T00:11:39.016793511Z" level=info msg="StopPodSandbox for \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\"" Apr 14 00:11:39.018368 containerd[1465]: time="2026-04-14T00:11:39.017968437Z" level=info msg="TearDown network for sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" successfully" Apr 14 00:11:39.018368 containerd[1465]: time="2026-04-14T00:11:39.018036803Z" level=info msg="StopPodSandbox for \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" returns successfully" Apr 14 00:11:39.082589 containerd[1465]: time="2026-04-14T00:11:39.019799611Z" level=info msg="RemovePodSandbox for \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\"" Apr 14 00:11:39.082589 containerd[1465]: time="2026-04-14T00:11:39.019853573Z" level=info msg="Forcibly stopping sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\"" Apr 14 00:11:39.082589 containerd[1465]: time="2026-04-14T00:11:39.019935030Z" level=info msg="TearDown network for sandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" successfully" Apr 14 00:11:39.090405 containerd[1465]: time="2026-04-14T00:11:39.088828641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 00:11:39.090753 containerd[1465]: time="2026-04-14T00:11:39.090585154Z" level=info msg="RemovePodSandbox \"6559c529ae7f627cb1726f7d8a1ac5ca2d4ff0b90e96708b914002f7f190b47e\" returns successfully" Apr 14 00:11:39.093265 containerd[1465]: time="2026-04-14T00:11:39.093071086Z" level=info msg="StopPodSandbox for \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\"" Apr 14 00:11:39.093541 containerd[1465]: time="2026-04-14T00:11:39.093377163Z" level=info msg="TearDown network for sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" successfully" Apr 14 00:11:39.093541 containerd[1465]: time="2026-04-14T00:11:39.093393803Z" level=info msg="StopPodSandbox for \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" returns successfully" Apr 14 00:11:39.099204 containerd[1465]: time="2026-04-14T00:11:39.097422031Z" level=info msg="RemovePodSandbox for \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\"" Apr 14 00:11:39.099204 containerd[1465]: time="2026-04-14T00:11:39.097458091Z" level=info msg="Forcibly stopping sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\"" Apr 14 00:11:39.099204 containerd[1465]: time="2026-04-14T00:11:39.097516745Z" level=info msg="TearDown network for sandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" successfully" Apr 14 00:11:39.203934 containerd[1465]: time="2026-04-14T00:11:39.203675627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 00:11:39.203934 containerd[1465]: time="2026-04-14T00:11:39.203791687Z" level=info msg="RemovePodSandbox \"ea11a08b55a6ec7a0cc6bd79da58a5101cef44325c9f3e0568464996a4922faa\" returns successfully" Apr 14 00:11:42.018661 systemd[1]: run-containerd-runc-k8s.io-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0-runc.WBfdni.mount: Deactivated successfully. Apr 14 00:11:49.024770 systemd[1]: run-containerd-runc-k8s.io-b55243fa776b5e32a97ad4bf70f0bdfc6b93e95d1fa4282b90deed3c01f6e8a0-runc.XX2JuF.mount: Deactivated successfully. Apr 14 00:11:59.758525 kubelet[2593]: E0414 00:11:59.758474 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:12:00.387470 sshd[5712]: pam_unix(sshd:session): session closed for user core Apr 14 00:12:00.393268 systemd[1]: sshd@108-10.0.0.37:22-10.0.0.1:42316.service: Deactivated successfully. Apr 14 00:12:00.400665 systemd[1]: session-109.scope: Deactivated successfully. Apr 14 00:12:00.401120 systemd[1]: session-109.scope: Consumed 1.315s CPU time. Apr 14 00:12:00.404430 systemd-logind[1450]: Session 109 logged out. Waiting for processes to exit. Apr 14 00:12:00.406735 systemd-logind[1450]: Removed session 109. Apr 14 00:12:00.758344 kubelet[2593]: E0414 00:12:00.758130 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"