Apr 21 10:44:46.978923 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:44:46.978942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:44:46.978951 kernel: BIOS-provided physical RAM map:
Apr 21 10:44:46.978956 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 21 10:44:46.978962 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 21 10:44:46.978967 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:44:46.978973 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 21 10:44:46.978978 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 21 10:44:46.978983 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:44:46.978989 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:44:46.978994 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:44:46.979000 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:44:46.979005 kernel: NX (Execute Disable) protection: active
Apr 21 10:44:46.979010 kernel: APIC: Static calls initialized
Apr 21 10:44:46.979016 kernel: SMBIOS 2.8 present.
Apr 21 10:44:46.979024 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 21 10:44:46.979029 kernel: Hypervisor detected: KVM
Apr 21 10:44:46.979035 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:44:46.979040 kernel: kvm-clock: using sched offset of 4377868357 cycles
Apr 21 10:44:46.979046 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:44:46.979052 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:44:46.979058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:44:46.979064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:44:46.979069 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 10:44:46.979076 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:44:46.979082 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:44:46.979088 kernel: Using GB pages for direct mapping
Apr 21 10:44:46.979093 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:44:46.979126 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 21 10:44:46.979133 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979138 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979144 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979150 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 21 10:44:46.979157 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979163 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979168 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979174 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:44:46.979180 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 21 10:44:46.979185 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 21 10:44:46.979191 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 21 10:44:46.979199 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 21 10:44:46.979207 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 21 10:44:46.979212 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 21 10:44:46.979218 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 21 10:44:46.979224 kernel: No NUMA configuration found
Apr 21 10:44:46.979230 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 21 10:44:46.979236 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 21 10:44:46.979242 kernel: Zone ranges:
Apr 21 10:44:46.979250 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:44:46.979256 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 21 10:44:46.979261 kernel: Normal empty
Apr 21 10:44:46.979266 kernel: Movable zone start for each node
Apr 21 10:44:46.979271 kernel: Early memory node ranges
Apr 21 10:44:46.979276 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:44:46.979281 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 21 10:44:46.979286 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 21 10:44:46.979291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:44:46.979297 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:44:46.979302 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 21 10:44:46.979307 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:44:46.979375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:44:46.979380 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:44:46.979385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:44:46.979390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:44:46.979395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:44:46.979400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:44:46.979407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:44:46.979412 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:44:46.979417 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:44:46.979422 kernel: TSC deadline timer available
Apr 21 10:44:46.979427 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:44:46.979432 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:44:46.979437 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:44:46.979442 kernel: kvm-guest: setup PV sched yield
Apr 21 10:44:46.979447 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:44:46.979453 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:44:46.979458 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:44:46.979463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:44:46.979468 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:44:46.979473 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:44:46.979478 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:44:46.979483 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:44:46.979488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:44:46.979494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:44:46.979500 kernel: random: crng init done
Apr 21 10:44:46.979505 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:44:46.979510 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:44:46.979515 kernel: Fallback order for Node 0: 0
Apr 21 10:44:46.979520 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 21 10:44:46.979525 kernel: Policy zone: DMA32
Apr 21 10:44:46.979530 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:44:46.979535 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 21 10:44:46.979542 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:44:46.979547 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:44:46.979552 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:44:46.979557 kernel: Dynamic Preempt: voluntary
Apr 21 10:44:46.979562 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:44:46.979567 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:44:46.979572 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:44:46.979577 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:44:46.979582 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:44:46.979587 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:44:46.979594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:44:46.979599 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:44:46.979604 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:44:46.979609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:44:46.979613 kernel: Console: colour VGA+ 80x25
Apr 21 10:44:46.979618 kernel: printk: console [ttyS0] enabled
Apr 21 10:44:46.979623 kernel: ACPI: Core revision 20230628
Apr 21 10:44:46.979628 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:44:46.979633 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:44:46.979640 kernel: x2apic enabled
Apr 21 10:44:46.979645 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:44:46.979650 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:44:46.979655 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:44:46.979660 kernel: kvm-guest: setup PV IPIs
Apr 21 10:44:46.979665 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:44:46.979670 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:44:46.979682 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:44:46.979687 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:44:46.979692 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:44:46.979698 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:44:46.979705 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:44:46.979710 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:44:46.979715 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:44:46.979721 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:44:46.979727 kernel: RETBleed: Vulnerable
Apr 21 10:44:46.979734 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:44:46.979739 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:44:46.979745 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:44:46.979750 kernel: active return thunk: its_return_thunk
Apr 21 10:44:46.979756 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:44:46.979761 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:44:46.979767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:44:46.979772 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:44:46.979778 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:44:46.979784 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:44:46.979790 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:44:46.979796 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:44:46.979801 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:44:46.979806 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:44:46.979812 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:44:46.979817 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:44:46.979823 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:44:46.979828 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:44:46.979835 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:44:46.979840 kernel: landlock: Up and running.
Apr 21 10:44:46.979846 kernel: SELinux: Initializing.
Apr 21 10:44:46.979852 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:44:46.979857 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:44:46.979863 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:44:46.979868 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:44:46.979874 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:44:46.979879 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:44:46.979886 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:44:46.979892 kernel: signal: max sigframe size: 3632
Apr 21 10:44:46.979897 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:44:46.979903 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:44:46.979908 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:44:46.979914 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:44:46.979919 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:44:46.979925 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:44:46.979930 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:44:46.979937 kernel: smpboot: Max logical packages: 1
Apr 21 10:44:46.979943 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:44:46.979948 kernel: devtmpfs: initialized
Apr 21 10:44:46.979954 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:44:46.979959 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:44:46.979965 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:44:46.979971 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:44:46.979976 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:44:46.979982 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:44:46.979989 kernel: audit: type=2000 audit(1776768285.465:1): state=initialized audit_enabled=0 res=1
Apr 21 10:44:46.979994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:44:46.979999 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:44:46.980005 kernel: cpuidle: using governor menu
Apr 21 10:44:46.980010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:44:46.980016 kernel: dca service started, version 1.12.1
Apr 21 10:44:46.980021 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:44:46.980027 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:44:46.980032 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:44:46.980039 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:44:46.980045 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:44:46.980050 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:44:46.980056 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:44:46.980061 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:44:46.980066 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:44:46.980072 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:44:46.980077 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:44:46.980083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:44:46.980089 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:44:46.980095 kernel: ACPI: Interpreter enabled
Apr 21 10:44:46.980123 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:44:46.980129 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:44:46.980135 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:44:46.980140 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:44:46.980146 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:44:46.980151 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:44:46.980259 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:44:46.980387 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:44:46.980446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:44:46.980453 kernel: PCI host bridge to bus 0000:00
Apr 21 10:44:46.980513 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:44:46.980566 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:44:46.980618 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:44:46.980673 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:44:46.980724 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:44:46.980775 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 21 10:44:46.980826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:44:46.980900 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:44:46.980965 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:44:46.981027 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:44:46.981084 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:44:46.981172 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:44:46.981231 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:44:46.981295 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:44:46.981435 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 21 10:44:46.981494 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:44:46.981554 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:44:46.981616 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:44:46.981675 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 21 10:44:46.981731 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:44:46.981789 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:44:46.981850 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:44:46.981907 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 21 10:44:46.981967 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:44:46.982023 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 21 10:44:46.982080 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:44:46.982172 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:44:46.982231 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:44:46.982293 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:44:46.982395 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 21 10:44:46.982456 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 21 10:44:46.982517 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:44:46.982574 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:44:46.982581 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:44:46.982587 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:44:46.982593 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:44:46.982598 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:44:46.982605 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:44:46.982611 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:44:46.982616 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:44:46.982622 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:44:46.982627 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:44:46.982633 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:44:46.982638 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:44:46.982644 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:44:46.982649 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:44:46.982656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:44:46.982661 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:44:46.982667 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:44:46.982672 kernel: iommu: Default domain type: Translated
Apr 21 10:44:46.982678 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:44:46.982683 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:44:46.982689 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:44:46.982694 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 21 10:44:46.982700 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 21 10:44:46.982757 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:44:46.982815 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:44:46.982872 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:44:46.982879 kernel: vgaarb: loaded
Apr 21 10:44:46.982884 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:44:46.982890 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:44:46.982896 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:44:46.982901 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:44:46.982906 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:44:46.982913 kernel: pnp: PnP ACPI init
Apr 21 10:44:46.982984 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:44:46.982992 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:44:46.982997 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:44:46.983003 kernel: NET: Registered PF_INET protocol family
Apr 21 10:44:46.983008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:44:46.983014 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:44:46.983019 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:44:46.983027 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:44:46.983032 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:44:46.983038 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:44:46.983043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:44:46.983049 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:44:46.983054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:44:46.983060 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:44:46.983141 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:44:46.983194 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:44:46.983248 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:44:46.983299 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:44:46.983461 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:44:46.983513 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 21 10:44:46.983520 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:44:46.983525 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:44:46.983531 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:44:46.983537 kernel: Initialise system trusted keyrings
Apr 21 10:44:46.983545 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:44:46.983550 kernel: Key type asymmetric registered
Apr 21 10:44:46.983556 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:44:46.983561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:44:46.983567 kernel: io scheduler mq-deadline registered
Apr 21 10:44:46.983572 kernel: io scheduler kyber registered
Apr 21 10:44:46.983578 kernel: io scheduler bfq registered
Apr 21 10:44:46.983583 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:44:46.983589 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:44:46.983596 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:44:46.983601 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:44:46.983607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:44:46.983612 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:44:46.983618 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:44:46.983623 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:44:46.983629 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:44:46.983687 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:44:46.983694 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:44:46.983747 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:44:46.983799 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:44:46 UTC (1776768286)
Apr 21 10:44:46.983850 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:44:46.983857 kernel: intel_pstate: CPU model not supported
Apr 21 10:44:46.983862 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:44:46.983868 kernel: Segment Routing with IPv6
Apr 21 10:44:46.983873 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:44:46.983879 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:44:46.983886 kernel: Key type dns_resolver registered
Apr 21 10:44:46.983891 kernel: IPI shorthand broadcast: enabled
Apr 21 10:44:46.983897 kernel: sched_clock: Marking stable (1282019174, 382838713)->(1791548586, -126690699)
Apr 21 10:44:46.983903 kernel: registered taskstats version 1
Apr 21 10:44:46.983908 kernel: Loading compiled-in X.509 certificates
Apr 21 10:44:46.983914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:44:46.983919 kernel: Key type .fscrypt registered
Apr 21 10:44:46.983924 kernel: Key type fscrypt-provisioning registered
Apr 21 10:44:46.983930 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:44:46.983940 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:44:46.983946 kernel: ima: No architecture policies found
Apr 21 10:44:46.983951 kernel: clk: Disabling unused clocks
Apr 21 10:44:46.983956 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:44:46.983962 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:44:46.983967 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:44:46.983973 kernel: Run /init as init process
Apr 21 10:44:46.983978 kernel: with arguments:
Apr 21 10:44:46.984007 kernel: /init
Apr 21 10:44:46.984015 kernel: with environment:
Apr 21 10:44:46.984020 kernel: HOME=/
Apr 21 10:44:46.984026 kernel: TERM=linux
Apr 21 10:44:46.984033 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:44:46.984041 systemd[1]: Detected virtualization kvm.
Apr 21 10:44:46.984047 systemd[1]: Detected architecture x86-64.
Apr 21 10:44:46.984052 systemd[1]: Running in initrd.
Apr 21 10:44:46.984058 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:44:46.984065 systemd[1]: Hostname set to .
Apr 21 10:44:46.984071 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:44:46.984077 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:44:46.984083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:44:46.984089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:44:46.984095 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:44:46.984128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:44:46.984134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:44:46.984142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:44:46.984158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:44:46.984164 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:44:46.984170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:44:46.984176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:44:46.984184 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:44:46.984190 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:44:46.984195 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:44:46.984202 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:44:46.984207 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:44:46.984213 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:44:46.984220 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:44:46.984229 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:44:46.984241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:44:46.984251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:44:46.984261 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:44:46.984269 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:44:46.984277 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:44:46.984287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:44:46.984296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:44:46.984455 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:44:46.984465 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:44:46.984478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:44:46.984488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:44:46.984516 systemd-journald[195]: Collecting audit messages is disabled. Apr 21 10:44:46.984537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:44:46.984548 systemd-journald[195]: Journal started Apr 21 10:44:46.984573 systemd-journald[195]: Runtime Journal (/run/log/journal/6aeb1c6e783c4cc699f83768bc19c1a4) is 6.0M, max 48.4M, 42.3M free. Apr 21 10:44:46.990778 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:44:46.993506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:44:46.996861 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:44:47.003625 systemd-modules-load[196]: Inserted module 'overlay' Apr 21 10:44:47.184643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:44:47.184665 kernel: Bridge firewalling registered Apr 21 10:44:47.007524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 21 10:44:47.029972 systemd-modules-load[196]: Inserted module 'br_netfilter' Apr 21 10:44:47.190920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:44:47.192206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:44:47.198240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:44:47.204153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:44:47.210697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:44:47.216549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:44:47.222693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:44:47.236875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:44:47.240820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:44:47.250914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:44:47.260883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:44:47.283595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:44:47.287560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 10:44:47.301167 dracut-cmdline[229]: dracut-dracut-053 Apr 21 10:44:47.307988 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:44:47.319149 systemd-resolved[231]: Positive Trust Anchors: Apr 21 10:44:47.319156 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:44:47.319181 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:44:47.321193 systemd-resolved[231]: Defaulting to hostname 'linux'. Apr 21 10:44:47.321925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:44:47.327285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:44:47.429400 kernel: SCSI subsystem initialized Apr 21 10:44:47.438392 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:44:47.450378 kernel: iscsi: registered transport (tcp) Apr 21 10:44:47.470443 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:44:47.470473 kernel: QLogic iSCSI HBA Driver Apr 21 10:44:47.503910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 10:44:47.515532 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:44:47.547177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 10:44:47.547218 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:44:47.549816 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:44:47.590560 kernel: raid6: avx512x4 gen() 45273 MB/s Apr 21 10:44:47.608409 kernel: raid6: avx512x2 gen() 44841 MB/s Apr 21 10:44:47.626409 kernel: raid6: avx512x1 gen() 44135 MB/s Apr 21 10:44:47.644416 kernel: raid6: avx2x4 gen() 36616 MB/s Apr 21 10:44:47.662410 kernel: raid6: avx2x2 gen() 36068 MB/s Apr 21 10:44:47.681491 kernel: raid6: avx2x1 gen() 28019 MB/s Apr 21 10:44:47.681507 kernel: raid6: using algorithm avx512x4 gen() 45273 MB/s Apr 21 10:44:47.701489 kernel: raid6: .... xor() 10108 MB/s, rmw enabled Apr 21 10:44:47.701509 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:44:47.722383 kernel: xor: automatically using best checksumming function avx Apr 21 10:44:47.863388 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:44:47.873266 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:44:47.889494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:44:47.901397 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 21 10:44:47.904011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:44:47.905696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:44:47.926616 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 21 10:44:47.952853 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:44:47.971586 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 21 10:44:48.006558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:44:48.018484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 10:44:48.029723 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:44:48.035577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:44:48.038948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:44:48.039974 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:44:48.058344 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:44:48.060360 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:44:48.062552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:44:48.071656 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:44:48.077414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:44:48.090789 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:44:48.090805 kernel: GPT:9289727 != 19775487 Apr 21 10:44:48.090812 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:44:48.090819 kernel: GPT:9289727 != 19775487 Apr 21 10:44:48.090826 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:44:48.090833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:44:48.088794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:44:48.088900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:44:48.096274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:44:48.102167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:44:48.102283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:44:48.108473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:44:48.131393 kernel: libata version 3.00 loaded. Apr 21 10:44:48.129876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:44:48.140755 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:44:48.140780 kernel: AES CTR mode by8 optimization enabled Apr 21 10:44:48.144460 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:44:48.144689 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:44:48.149856 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:44:48.150022 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:44:48.159410 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461) Apr 21 10:44:48.161711 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 10:44:48.370529 kernel: scsi host0: ahci Apr 21 10:44:48.370681 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (459) Apr 21 10:44:48.370691 kernel: scsi host1: ahci Apr 21 10:44:48.370771 kernel: scsi host2: ahci Apr 21 10:44:48.370845 kernel: scsi host3: ahci Apr 21 10:44:48.370922 kernel: scsi host4: ahci Apr 21 10:44:48.370989 kernel: scsi host5: ahci Apr 21 10:44:48.371055 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Apr 21 10:44:48.371063 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Apr 21 10:44:48.371070 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Apr 21 10:44:48.371077 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Apr 21 10:44:48.371087 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Apr 21 10:44:48.371094 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Apr 21 
10:44:48.359242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:44:48.362875 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 10:44:48.372838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:44:48.377345 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:44:48.386007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:44:48.406499 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:44:48.411458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:44:48.417268 disk-uuid[556]: Primary Header is updated. Apr 21 10:44:48.417268 disk-uuid[556]: Secondary Entries is updated. Apr 21 10:44:48.417268 disk-uuid[556]: Secondary Header is updated. Apr 21 10:44:48.422981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:44:48.427860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:44:48.431415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:44:48.431538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 10:44:48.484409 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:44:48.484454 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:44:48.484464 kernel: ata3.00: applying bridge limits Apr 21 10:44:48.489980 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:44:48.490014 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:44:48.492440 kernel: ata3.00: configured for UDMA/100 Apr 21 10:44:48.504399 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:44:48.508362 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:44:48.508390 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:44:48.511665 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:44:48.570899 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:44:48.571092 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:44:48.585436 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:44:49.429382 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:44:49.429939 disk-uuid[557]: The operation has completed successfully. Apr 21 10:44:49.453522 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:44:49.453639 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:44:49.471845 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:44:49.478145 sh[595]: Success Apr 21 10:44:49.490387 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:44:49.523949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:44:49.544781 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:44:49.547284 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:44:49.561962 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:44:49.562004 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:44:49.562014 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:44:49.566845 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:44:49.566859 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:44:49.577295 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:44:49.583245 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:44:49.600565 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:44:49.606177 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:44:49.619368 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:44:49.619405 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:44:49.619413 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:44:49.625534 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:44:49.636234 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:44:49.641269 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:44:49.646844 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:44:49.654524 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:44:49.709304 ignition[673]: Ignition 2.19.0 Apr 21 10:44:49.709382 ignition[673]: Stage: fetch-offline Apr 21 10:44:49.709407 ignition[673]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:49.709413 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:49.709479 ignition[673]: parsed url from cmdline: "" Apr 21 10:44:49.709481 ignition[673]: no config URL provided Apr 21 10:44:49.709485 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:44:49.709489 ignition[673]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:44:49.709509 ignition[673]: op(1): [started] loading QEMU firmware config module Apr 21 10:44:49.709513 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:44:49.716676 ignition[673]: op(1): [finished] loading QEMU firmware config module Apr 21 10:44:49.778803 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:44:49.794627 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:44:49.815663 systemd-networkd[784]: lo: Link UP Apr 21 10:44:49.815699 systemd-networkd[784]: lo: Gained carrier Apr 21 10:44:49.816928 systemd-networkd[784]: Enumeration completed Apr 21 10:44:49.818067 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:44:49.818070 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:44:49.819533 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:44:49.822165 systemd-networkd[784]: eth0: Link UP Apr 21 10:44:49.822168 systemd-networkd[784]: eth0: Gained carrier Apr 21 10:44:49.822175 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:44:49.822196 systemd[1]: Reached target network.target - Network. Apr 21 10:44:49.851400 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:44:49.973405 ignition[673]: parsing config with SHA512: e49f4de2eccdaa605c8851b72778340d4727003f10d3fe1a465a6e783e55f3b39953633d3a5c2003c6415bfcd17198ea8ea730034387d23faccb0d3e80bc27da Apr 21 10:44:49.976960 unknown[673]: fetched base config from "system" Apr 21 10:44:49.976987 unknown[673]: fetched user config from "qemu" Apr 21 10:44:49.977393 ignition[673]: fetch-offline: fetch-offline passed Apr 21 10:44:49.979774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:44:49.977441 ignition[673]: Ignition finished successfully Apr 21 10:44:49.984629 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:44:49.996524 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:44:50.013301 ignition[788]: Ignition 2.19.0 Apr 21 10:44:50.013386 ignition[788]: Stage: kargs Apr 21 10:44:50.013510 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:50.013517 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:50.014097 ignition[788]: kargs: kargs passed Apr 21 10:44:50.014163 ignition[788]: Ignition finished successfully Apr 21 10:44:50.026109 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:44:50.039497 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:44:50.055985 ignition[796]: Ignition 2.19.0 Apr 21 10:44:50.056009 ignition[796]: Stage: disks Apr 21 10:44:50.056152 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:50.056160 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:50.056862 ignition[796]: disks: disks passed Apr 21 10:44:50.056891 ignition[796]: Ignition finished successfully Apr 21 10:44:50.067504 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:44:50.068527 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:44:50.075232 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:44:50.076902 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:44:50.084023 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:44:50.090761 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:44:50.102485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:44:50.116297 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:44:50.120305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:44:50.131465 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:44:50.225386 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:44:50.225675 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:44:50.227191 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:44:50.236474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:44:50.241041 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:44:50.244841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 21 10:44:50.254807 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Apr 21 10:44:50.244868 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:44:50.263892 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:44:50.263908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:44:50.263921 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:44:50.244884 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:44:50.272595 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:44:50.273994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:44:50.277241 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:44:50.281884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:44:50.319158 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:44:50.324088 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:44:50.330726 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:44:50.334537 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:44:50.423894 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:44:50.442468 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:44:50.448492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:44:50.457451 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:44:50.473807 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 21 10:44:50.484221 ignition[929]: INFO : Ignition 2.19.0 Apr 21 10:44:50.484221 ignition[929]: INFO : Stage: mount Apr 21 10:44:50.487945 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:50.487945 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:50.487945 ignition[929]: INFO : mount: mount passed Apr 21 10:44:50.487945 ignition[929]: INFO : Ignition finished successfully Apr 21 10:44:50.498513 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:44:50.510404 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:44:50.557871 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:44:50.567581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:44:50.580377 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Apr 21 10:44:50.585723 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:44:50.585739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:44:50.585756 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:44:50.593402 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:44:50.594630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:44:50.636393 ignition[960]: INFO : Ignition 2.19.0 Apr 21 10:44:50.636393 ignition[960]: INFO : Stage: files Apr 21 10:44:50.640596 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:50.640596 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:50.640596 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:44:50.640596 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:44:50.640596 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:44:50.657044 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:44:50.657044 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:44:50.657044 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:44:50.657044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:44:50.657044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:44:50.642178 unknown[960]: wrote ssh authorized keys file for user: core Apr 21 10:44:50.702585 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:44:50.783720 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:44:50.783720 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 10:44:50.783720 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 21 10:44:51.021526 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 21 10:44:51.200181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 10:44:51.200181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:44:51.200181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:44:51.200181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:44:51.218119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 21 10:44:51.332897 systemd-networkd[784]: eth0: Gained IPv6LL Apr 21 10:44:51.516948 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 21 10:44:51.911915 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:44:51.911915 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 21 10:44:51.920452 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 21 10:44:51.974878 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:44:51.979481 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:44:51.979481 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 21 10:44:51.979481 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:44:51.979481 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:44:51.996504 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:44:51.996504 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:44:51.996504 ignition[960]: INFO : files: files passed Apr 21 10:44:51.996504 ignition[960]: INFO : Ignition finished successfully Apr 21 10:44:51.980825 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:44:52.013639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:44:52.019103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 10:44:52.030680 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 21 10:44:52.033003 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Apr 21 10:44:52.036437 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:44:52.036437 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:44:52.036412 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:44:52.055828 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:44:52.039508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:44:52.046902 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:44:52.071793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:44:52.093103 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:44:52.093250 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:44:52.097249 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:44:52.106609 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:44:52.107219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:44:52.125575 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:44:52.142042 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:44:52.144020 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:44:52.159290 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:44:52.160964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 21 10:44:52.166166 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:44:52.171391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:44:52.171510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:44:52.180864 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:44:52.185554 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:44:52.190122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:44:52.191932 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:44:52.198441 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:44:52.204008 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:44:52.213205 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:44:52.214277 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:44:52.220452 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:44:52.225814 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:44:52.226294 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:44:52.226512 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:44:52.236092 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:44:52.238004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:44:52.244406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:44:52.244557 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:44:52.251657 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:44:52.251875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 21 10:44:52.262384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:44:52.262503 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:44:52.267738 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:44:52.272295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 10:44:52.282283 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:44:52.283404 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:44:52.290088 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:44:52.294479 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 10:44:52.294580 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:44:52.300443 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:44:52.300495 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:44:52.304914 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:44:52.305009 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:44:52.306442 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:44:52.306501 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:44:52.329715 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:44:52.330863 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:44:52.331176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:44:52.337922 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Apr 21 10:44:52.346220 ignition[1014]: INFO : Ignition 2.19.0 Apr 21 10:44:52.346220 ignition[1014]: INFO : Stage: umount Apr 21 10:44:52.346220 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:44:52.346220 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:44:52.361301 ignition[1014]: INFO : umount: umount passed Apr 21 10:44:52.361301 ignition[1014]: INFO : Ignition finished successfully Apr 21 10:44:52.347218 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:44:52.347362 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:44:52.350283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:44:52.350440 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:44:52.359441 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:44:52.359529 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:44:52.363651 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 10:44:52.363741 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:44:52.367430 systemd[1]: Stopped target network.target - Network. Apr 21 10:44:52.372101 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:44:52.372188 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:44:52.376733 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:44:52.376784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:44:52.383458 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 10:44:52.383491 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:44:52.384950 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 10:44:52.384998 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 21 10:44:52.393523 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 10:44:52.401595 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 10:44:52.407223 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:44:52.419472 systemd-networkd[784]: eth0: DHCPv6 lease lost Apr 21 10:44:52.422035 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 10:44:52.422256 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 10:44:52.425390 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 10:44:52.425429 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:44:52.453555 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 10:44:52.456822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 10:44:52.456876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:44:52.463004 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:44:52.464447 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 10:44:52.464516 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 10:44:52.475579 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 10:44:52.475673 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 10:44:52.478790 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 10:44:52.478843 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 10:44:52.481162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:44:52.481200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:44:52.488207 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 21 10:44:52.488240 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 10:44:52.494105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 10:44:52.494163 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:44:52.515710 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 10:44:52.515810 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 10:44:52.526217 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 10:44:52.526395 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:44:52.527393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 10:44:52.527421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 10:44:52.539454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 10:44:52.539477 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:44:52.540850 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 10:44:52.540882 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:44:52.549198 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 10:44:52.549236 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 10:44:52.555775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:44:52.555815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:44:52.590850 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 10:44:52.592771 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 10:44:52.592826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 21 10:44:52.599600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:44:52.599651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:44:52.605997 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 10:44:52.606120 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 10:44:52.614874 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 10:44:52.620750 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 10:44:52.632070 systemd[1]: Switching root. Apr 21 10:44:52.656880 systemd-journald[195]: Journal stopped Apr 21 10:44:53.555785 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Apr 21 10:44:53.555839 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 10:44:53.555851 kernel: SELinux: policy capability open_perms=1 Apr 21 10:44:53.555859 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 10:44:53.555866 kernel: SELinux: policy capability always_check_network=0 Apr 21 10:44:53.555874 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 10:44:53.555885 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 10:44:53.555893 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 10:44:53.555900 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 10:44:53.555908 kernel: audit: type=1403 audit(1776768292.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 10:44:53.555918 systemd[1]: Successfully loaded SELinux policy in 45.163ms. Apr 21 10:44:53.555932 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.216ms. 
Apr 21 10:44:53.555944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:44:53.555952 systemd[1]: Detected virtualization kvm. Apr 21 10:44:53.555960 systemd[1]: Detected architecture x86-64. Apr 21 10:44:53.555967 systemd[1]: Detected first boot. Apr 21 10:44:53.555975 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:44:53.555984 zram_generator::config[1056]: No configuration found. Apr 21 10:44:53.555993 systemd[1]: Populated /etc with preset unit settings. Apr 21 10:44:53.556001 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 10:44:53.556009 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 21 10:44:53.556017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 10:44:53.556024 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 10:44:53.556032 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 10:44:53.556040 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 10:44:53.556049 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 10:44:53.556056 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 10:44:53.556066 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 10:44:53.556074 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 10:44:53.556081 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 21 10:44:53.556088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:44:53.556096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:44:53.556103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 10:44:53.556111 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 10:44:53.556120 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 10:44:53.556129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:44:53.556136 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 10:44:53.556179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:44:53.556188 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 21 10:44:53.556195 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 10:44:53.556204 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 10:44:53.556211 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 10:44:53.556221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:44:53.556232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:44:53.556240 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:44:53.556247 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:44:53.556255 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 10:44:53.556263 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 10:44:53.556271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 21 10:44:53.556279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:44:53.556287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:44:53.556296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 10:44:53.556304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 10:44:53.556357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 10:44:53.556365 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 10:44:53.556373 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:44:53.556381 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 10:44:53.556389 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 10:44:53.556397 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 21 10:44:53.556405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:44:53.556415 systemd[1]: Reached target machines.target - Containers. Apr 21 10:44:53.556423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 10:44:53.556431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:44:53.556438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:44:53.556446 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 10:44:53.556454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:44:53.556462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 21 10:44:53.556470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:44:53.556480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 21 10:44:53.556489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:44:53.556497 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 10:44:53.556504 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 10:44:53.556512 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 10:44:53.556519 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 10:44:53.556530 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 10:44:53.556538 kernel: fuse: init (API version 7.39) Apr 21 10:44:53.556545 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:44:53.556554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:44:53.556562 kernel: loop: module loaded Apr 21 10:44:53.556570 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 10:44:53.556577 kernel: ACPI: bus type drm_connector registered Apr 21 10:44:53.556585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 10:44:53.556592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:44:53.556612 systemd-journald[1130]: Collecting audit messages is disabled. Apr 21 10:44:53.556630 systemd-journald[1130]: Journal started Apr 21 10:44:53.556646 systemd-journald[1130]: Runtime Journal (/run/log/journal/6aeb1c6e783c4cc699f83768bc19c1a4) is 6.0M, max 48.4M, 42.3M free. Apr 21 10:44:53.165625 systemd[1]: Queued start job for default target multi-user.target. 
Apr 21 10:44:53.182103 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 21 10:44:53.182623 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 10:44:53.562588 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 10:44:53.562632 systemd[1]: Stopped verity-setup.service. Apr 21 10:44:53.569432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:44:53.575730 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:44:53.576076 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 10:44:53.578826 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 10:44:53.581668 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 10:44:53.584251 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 10:44:53.587036 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 21 10:44:53.589856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 10:44:53.592543 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 10:44:53.595793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:44:53.599177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 10:44:53.599294 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 10:44:53.602715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:44:53.602856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:44:53.605879 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:44:53.606021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:44:53.608892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 21 10:44:53.609026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:44:53.612581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 10:44:53.612712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 10:44:53.615624 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:44:53.615768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:44:53.618687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:44:53.621668 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 10:44:53.624998 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 10:44:53.628396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:44:53.639610 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 10:44:53.651502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 21 10:44:53.655354 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 10:44:53.658138 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 10:44:53.658204 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:44:53.659836 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:44:53.664755 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 10:44:53.668509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 10:44:53.671077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 21 10:44:53.672099 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 10:44:53.678409 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 10:44:53.681917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:44:53.684848 systemd-journald[1130]: Time spent on flushing to /var/log/journal/6aeb1c6e783c4cc699f83768bc19c1a4 is 18.927ms for 951 entries. Apr 21 10:44:53.684848 systemd-journald[1130]: System Journal (/var/log/journal/6aeb1c6e783c4cc699f83768bc19c1a4) is 8.0M, max 195.6M, 187.6M free. Apr 21 10:44:53.719048 systemd-journald[1130]: Received client request to flush runtime journal. Apr 21 10:44:53.694498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 10:44:53.697481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:44:53.701747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:44:53.706851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:44:53.711573 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 10:44:53.716830 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 21 10:44:53.722590 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 10:44:53.726715 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 21 10:44:53.730783 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 10:44:53.733593 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Apr 21 10:44:53.739883 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 21 10:44:53.749106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 10:44:53.754512 kernel: loop0: detected capacity change from 0 to 217752 Apr 21 10:44:53.762954 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:44:53.767075 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 21 10:44:53.770766 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:44:53.784210 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 21 10:44:53.784449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:44:53.795497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:44:53.799450 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:44:53.799966 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:44:53.822388 kernel: loop1: detected capacity change from 0 to 142488 Apr 21 10:44:53.820264 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 21 10:44:53.820277 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 21 10:44:53.824659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:44:53.863951 kernel: loop2: detected capacity change from 0 to 140768 Apr 21 10:44:53.897434 kernel: loop3: detected capacity change from 0 to 217752 Apr 21 10:44:53.911369 kernel: loop4: detected capacity change from 0 to 142488 Apr 21 10:44:53.928398 kernel: loop5: detected capacity change from 0 to 140768 Apr 21 10:44:53.942479 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Apr 21 10:44:53.942803 (sd-merge)[1194]: Merged extensions into '/usr'. Apr 21 10:44:53.946412 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:44:53.946501 systemd[1]: Reloading... Apr 21 10:44:53.992410 zram_generator::config[1216]: No configuration found. Apr 21 10:44:54.023967 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:44:54.082591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:44:54.112189 systemd[1]: Reloading finished in 165 ms. Apr 21 10:44:54.141540 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:44:54.144852 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:44:54.148231 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:44:54.166475 systemd[1]: Starting ensure-sysext.service... Apr 21 10:44:54.169397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:44:54.173525 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:44:54.177958 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:44:54.177990 systemd[1]: Reloading... Apr 21 10:44:54.192526 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:44:54.192718 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:44:54.193231 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Apr 21 10:44:54.193442 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 21 10:44:54.193477 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 21 10:44:54.195295 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Apr 21 10:44:54.201616 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:44:54.201624 systemd-tmpfiles[1259]: Skipping /boot Apr 21 10:44:54.208203 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:44:54.208711 systemd-tmpfiles[1259]: Skipping /boot Apr 21 10:44:54.212359 zram_generator::config[1283]: No configuration found. Apr 21 10:44:54.256381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1308) Apr 21 10:44:54.305431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 21 10:44:54.302111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:44:54.314438 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:44:54.314502 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:44:54.320771 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:44:54.320930 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:44:54.336449 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 21 10:44:54.349029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:44:54.353906 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:44:54.353934 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:44:54.354370 systemd[1]: Reloading finished in 176 ms. 
Apr 21 10:44:54.365530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:44:54.374767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:44:54.401022 systemd[1]: Finished ensure-sysext.service. Apr 21 10:44:54.409357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:44:54.463630 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:44:54.472960 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:44:54.476290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:44:54.477270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:44:54.481618 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:44:54.488742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:44:54.493446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:44:54.496670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:44:54.500557 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:44:54.504932 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:44:54.536120 augenrules[1378]: No rules Apr 21 10:44:54.542078 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:44:54.552670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:44:54.559754 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Apr 21 10:44:54.569639 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:44:54.577519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:44:54.580728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:44:54.581940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:44:54.585833 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:44:54.590540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:44:54.590860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:44:54.595056 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:44:54.595472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:44:54.598809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:44:54.600527 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:44:54.604090 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:44:54.604548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:44:54.606206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:44:54.607099 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:44:54.617786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:44:54.618196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 21 10:44:54.646755 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:44:54.649549 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:44:54.651410 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:44:54.653702 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:44:54.655509 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:44:54.659663 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:44:54.661013 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:44:54.672435 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:44:54.685260 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 10:44:54.739597 systemd-networkd[1375]: lo: Link UP Apr 21 10:44:54.739628 systemd-networkd[1375]: lo: Gained carrier Apr 21 10:44:54.740589 systemd-networkd[1375]: Enumeration completed Apr 21 10:44:54.741410 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:44:54.741416 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:44:54.742235 systemd-networkd[1375]: eth0: Link UP Apr 21 10:44:54.742240 systemd-networkd[1375]: eth0: Gained carrier Apr 21 10:44:54.742249 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:44:54.748634 systemd-resolved[1383]: Positive Trust Anchors: Apr 21 10:44:54.748818 systemd-resolved[1383]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:44:54.748874 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:44:54.753696 systemd-resolved[1383]: Defaulting to hostname 'linux'. Apr 21 10:44:54.760387 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:44:54.760966 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Apr 21 10:44:55.357217 systemd-resolved[1383]: Clock change detected. Flushing caches. Apr 21 10:44:55.357242 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 10:44:55.357277 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2026-04-21 10:44:55.357167 UTC. Apr 21 10:44:55.444089 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:44:55.446948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:44:55.450636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:44:55.453944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:44:55.457216 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:44:55.461581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:44:55.464417 systemd[1]: Reached target network.target - Network. 
Apr 21 10:44:55.466743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:44:55.470070 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:44:55.472915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:44:55.476060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:44:55.479265 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:44:55.482474 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:44:55.482514 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:44:55.484831 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:44:55.487570 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:44:55.490364 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:44:55.493564 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:44:55.496678 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:44:55.500796 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:44:55.517826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:44:55.521793 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:44:55.525925 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:44:55.529327 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:44:55.530602 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:44:55.532447 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:44:55.535649 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:44:55.538640 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:44:55.538737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:44:55.539814 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:44:55.543449 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:44:55.546828 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:44:55.550250 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:44:55.553908 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:44:55.554912 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:44:55.560995 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 21 10:44:55.565795 extend-filesystems[1423]: Found loop3 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found loop4 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found loop5 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found sr0 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda1 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda2 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda3 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found usr Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda4 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda6 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda7 Apr 21 10:44:55.565795 extend-filesystems[1423]: Found vda9 Apr 21 10:44:55.565795 extend-filesystems[1423]: Checking size of /dev/vda9 Apr 21 10:44:55.636204 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 10:44:55.636228 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1306) Apr 21 10:44:55.616059 dbus-daemon[1421]: [system] SELinux support is enabled Apr 21 10:44:55.565960 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 10:44:55.636451 extend-filesystems[1423]: Resized partition /dev/vda9 Apr 21 10:44:55.651106 jq[1422]: false Apr 21 10:44:55.571283 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:44:55.651389 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:44:55.663914 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 10:44:55.577188 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 10:44:55.579955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 21 10:44:55.686149 update_engine[1437]: I20260421 10:44:55.642213 1437 main.cc:92] Flatcar Update Engine starting Apr 21 10:44:55.686149 update_engine[1437]: I20260421 10:44:55.643285 1437 update_check_scheduler.cc:74] Next update check in 6m28s Apr 21 10:44:55.580338 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 10:44:55.687075 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 10:44:55.687075 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 10:44:55.687075 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 10:44:55.581007 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 10:44:55.699806 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Apr 21 10:44:55.581988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:44:55.701628 jq[1438]: true Apr 21 10:44:55.582825 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:44:55.587444 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:44:55.701917 tar[1444]: linux-amd64/LICENSE Apr 21 10:44:55.701917 tar[1444]: linux-amd64/helm Apr 21 10:44:55.587653 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:44:55.599939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 10:44:55.702207 jq[1452]: true Apr 21 10:44:55.600056 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 10:44:55.617870 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 21 10:44:55.624994 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 10:44:55.625011 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 10:44:55.632421 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 10:44:55.632437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 10:44:55.654260 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 10:44:55.654435 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 10:44:55.662383 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:44:55.669370 systemd[1]: Started update-engine.service - Update Engine. Apr 21 10:44:55.681933 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 10:44:55.686868 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Apr 21 10:44:55.686880 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:44:55.687166 systemd-logind[1436]: New seat seat0. Apr 21 10:44:55.687917 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:44:55.688059 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:44:55.698381 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 21 10:44:55.722751 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:44:55.736879 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:44:55.739449 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:44:55.745293 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 10:44:55.748941 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:44:55.752560 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:44:55.772014 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:44:55.780131 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:44:55.780505 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:44:55.790056 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:44:55.803378 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:44:55.820020 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:44:55.825619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:44:55.829927 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:44:55.859518 containerd[1449]: time="2026-04-21T10:44:55.859413413Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:44:55.881341 containerd[1449]: time="2026-04-21T10:44:55.881291495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883423144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883449166Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883462095Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883614529Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883627203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883671264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883681436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883879326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883890563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883899587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884450 containerd[1449]: time="2026-04-21T10:44:55.883906597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.883960033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.884090228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.884193857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.884203805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.884252312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 21 10:44:55.884790 containerd[1449]: time="2026-04-21T10:44:55.884299739Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890729479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890788258Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890804707Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890815974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890825558Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.890926315Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891143952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891213800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891223887Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891233792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891243003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891251723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891261252Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892083 containerd[1449]: time="2026-04-21T10:44:55.891271341Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891287086Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891297671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891308739Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891317224Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891332055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891340834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891351594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891362613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891371868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891380815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891389403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891398061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891406730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892304 containerd[1449]: time="2026-04-21T10:44:55.891415868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891423655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891432259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891441631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891456020Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891471701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891480415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891489494Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891589383Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891604405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891612165Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891621287Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891628650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891637263Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:44:55.892479 containerd[1449]: time="2026-04-21T10:44:55.891644250Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:44:55.892735 containerd[1449]: time="2026-04-21T10:44:55.891651620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 10:44:55.892850 containerd[1449]: time="2026-04-21T10:44:55.892787717Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:44:55.893022 containerd[1449]: time="2026-04-21T10:44:55.893011894Z" level=info msg="Connect containerd service" Apr 21 10:44:55.893090 containerd[1449]: time="2026-04-21T10:44:55.893079221Z" level=info msg="using legacy CRI server" Apr 21 10:44:55.893120 containerd[1449]: time="2026-04-21T10:44:55.893113413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:44:55.893227 containerd[1449]: time="2026-04-21T10:44:55.893218060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:44:55.894231 containerd[1449]: time="2026-04-21T10:44:55.894171114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:44:55.894474 containerd[1449]: time="2026-04-21T10:44:55.894422982Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:44:55.894501 containerd[1449]: time="2026-04-21T10:44:55.894492000Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:44:55.894616 containerd[1449]: time="2026-04-21T10:44:55.894525356Z" level=info msg="Start subscribing containerd event" Apr 21 10:44:55.894634 containerd[1449]: time="2026-04-21T10:44:55.894616637Z" level=info msg="Start recovering state" Apr 21 10:44:55.894844 containerd[1449]: time="2026-04-21T10:44:55.894789281Z" level=info msg="Start event monitor" Apr 21 10:44:55.894935 containerd[1449]: time="2026-04-21T10:44:55.894899318Z" level=info msg="Start snapshots syncer" Apr 21 10:44:55.894935 containerd[1449]: time="2026-04-21T10:44:55.894917459Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:44:55.894935 containerd[1449]: time="2026-04-21T10:44:55.894925591Z" level=info msg="Start streaming server" Apr 21 10:44:55.895057 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:44:55.898035 containerd[1449]: time="2026-04-21T10:44:55.897986467Z" level=info msg="containerd successfully booted in 0.039582s" Apr 21 10:44:56.123809 tar[1444]: linux-amd64/README.md Apr 21 10:44:56.145780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:44:57.048908 systemd-networkd[1375]: eth0: Gained IPv6LL Apr 21 10:44:57.051820 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:44:57.055897 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:44:57.070050 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:44:57.075360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:44:57.079600 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:44:57.099672 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:44:57.099925 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 10:44:57.103494 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:44:57.107606 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:44:57.813155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:44:57.816853 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:44:57.819839 systemd[1]: Startup finished in 1.418s (kernel) + 6.090s (initrd) + 4.461s (userspace) = 11.970s. Apr 21 10:44:57.880057 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:44:58.275605 kubelet[1536]: E0421 10:44:58.275429 1536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:44:58.277795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:44:58.277930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:45:01.207679 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:45:01.209067 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:41978.service - OpenSSH per-connection server daemon (10.0.0.1:41978). Apr 21 10:45:01.266055 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 41978 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.267828 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.276024 systemd-logind[1436]: New session 1 of user core. Apr 21 10:45:01.277049 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:45:01.293250 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 21 10:45:01.303243 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:45:01.305278 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:45:01.312985 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:45:01.381671 systemd[1553]: Queued start job for default target default.target. Apr 21 10:45:01.392777 systemd[1553]: Created slice app.slice - User Application Slice. Apr 21 10:45:01.392825 systemd[1553]: Reached target paths.target - Paths. Apr 21 10:45:01.392838 systemd[1553]: Reached target timers.target - Timers. Apr 21 10:45:01.394017 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:45:01.403970 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:45:01.404035 systemd[1553]: Reached target sockets.target - Sockets. Apr 21 10:45:01.404046 systemd[1553]: Reached target basic.target - Basic System. Apr 21 10:45:01.404070 systemd[1553]: Reached target default.target - Main User Target. Apr 21 10:45:01.404088 systemd[1553]: Startup finished in 85ms. Apr 21 10:45:01.404300 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:45:01.405558 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:45:01.465328 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:41984.service - OpenSSH per-connection server daemon (10.0.0.1:41984). Apr 21 10:45:01.502887 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 41984 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.504237 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.508154 systemd-logind[1436]: New session 2 of user core. Apr 21 10:45:01.517892 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 21 10:45:01.571752 sshd[1564]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:01.577492 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:41984.service: Deactivated successfully. Apr 21 10:45:01.578798 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:45:01.579931 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:45:01.580957 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:41996.service - OpenSSH per-connection server daemon (10.0.0.1:41996). Apr 21 10:45:01.581781 systemd-logind[1436]: Removed session 2. Apr 21 10:45:01.613025 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.614158 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.618625 systemd-logind[1436]: New session 3 of user core. Apr 21 10:45:01.635909 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:45:01.684429 sshd[1571]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:01.698649 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:41996.service: Deactivated successfully. Apr 21 10:45:01.699929 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:45:01.701157 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:45:01.702182 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:42004.service - OpenSSH per-connection server daemon (10.0.0.1:42004). Apr 21 10:45:01.702998 systemd-logind[1436]: Removed session 3. Apr 21 10:45:01.732833 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 42004 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.734045 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.737547 systemd-logind[1436]: New session 4 of user core. Apr 21 10:45:01.753062 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 21 10:45:01.808497 sshd[1578]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:01.816674 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:42004.service: Deactivated successfully. Apr 21 10:45:01.817851 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:45:01.818919 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:45:01.819905 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). Apr 21 10:45:01.820661 systemd-logind[1436]: Removed session 4. Apr 21 10:45:01.850600 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.851666 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.855663 systemd-logind[1436]: New session 5 of user core. Apr 21 10:45:01.861870 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:45:01.918660 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:45:01.918940 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:45:01.934001 sudo[1588]: pam_unix(sudo:session): session closed for user root Apr 21 10:45:01.935759 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:01.941496 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:42008.service: Deactivated successfully. Apr 21 10:45:01.942657 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:45:01.943798 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:45:01.944823 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Apr 21 10:45:01.945512 systemd-logind[1436]: Removed session 5. 
Apr 21 10:45:01.976129 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:01.977140 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:01.980958 systemd-logind[1436]: New session 6 of user core. Apr 21 10:45:01.991929 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:45:02.044743 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:45:02.044963 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:45:02.048899 sudo[1597]: pam_unix(sudo:session): session closed for user root Apr 21 10:45:02.053922 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:45:02.054129 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:45:02.072027 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:45:02.073836 auditctl[1600]: No rules Apr 21 10:45:02.074091 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:45:02.074276 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:45:02.076261 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:45:02.104725 augenrules[1618]: No rules Apr 21 10:45:02.105784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:45:02.106619 sudo[1596]: pam_unix(sudo:session): session closed for user root Apr 21 10:45:02.108109 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:02.115940 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:42012.service: Deactivated successfully. Apr 21 10:45:02.116964 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 21 10:45:02.118001 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:45:02.127966 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:42028.service - OpenSSH per-connection server daemon (10.0.0.1:42028). Apr 21 10:45:02.128887 systemd-logind[1436]: Removed session 6. Apr 21 10:45:02.156767 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 42028 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:45:02.157830 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:02.161467 systemd-logind[1436]: New session 7 of user core. Apr 21 10:45:02.170882 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:45:02.222124 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:45:02.222347 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:45:02.510009 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:45:02.510062 (dockerd)[1647]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:45:02.759744 dockerd[1647]: time="2026-04-21T10:45:02.759616513Z" level=info msg="Starting up" Apr 21 10:45:02.887834 dockerd[1647]: time="2026-04-21T10:45:02.887610372Z" level=info msg="Loading containers: start." Apr 21 10:45:03.008782 kernel: Initializing XFRM netlink socket Apr 21 10:45:03.092468 systemd-networkd[1375]: docker0: Link UP Apr 21 10:45:03.120650 dockerd[1647]: time="2026-04-21T10:45:03.120553785Z" level=info msg="Loading containers: done." 
Apr 21 10:45:03.135043 dockerd[1647]: time="2026-04-21T10:45:03.134973833Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:45:03.135157 dockerd[1647]: time="2026-04-21T10:45:03.135104656Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:45:03.135182 dockerd[1647]: time="2026-04-21T10:45:03.135175991Z" level=info msg="Daemon has completed initialization" Apr 21 10:45:03.171988 dockerd[1647]: time="2026-04-21T10:45:03.171815663Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:45:03.171976 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:45:03.576396 containerd[1449]: time="2026-04-21T10:45:03.576213852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 10:45:04.051637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619142380.mount: Deactivated successfully. 
Apr 21 10:45:04.809753 containerd[1449]: time="2026-04-21T10:45:04.809576290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:04.810623 containerd[1449]: time="2026-04-21T10:45:04.810554128Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 21 10:45:04.811975 containerd[1449]: time="2026-04-21T10:45:04.811935625Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:04.814454 containerd[1449]: time="2026-04-21T10:45:04.814408536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:04.815519 containerd[1449]: time="2026-04-21T10:45:04.815468272Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.239205303s" Apr 21 10:45:04.815519 containerd[1449]: time="2026-04-21T10:45:04.815515294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 21 10:45:04.816297 containerd[1449]: time="2026-04-21T10:45:04.816260251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 10:45:05.623931 containerd[1449]: time="2026-04-21T10:45:05.623814396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:05.625305 containerd[1449]: time="2026-04-21T10:45:05.625177202Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 21 10:45:05.626440 containerd[1449]: time="2026-04-21T10:45:05.626342626Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:05.629022 containerd[1449]: time="2026-04-21T10:45:05.628967620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:05.629658 containerd[1449]: time="2026-04-21T10:45:05.629609756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 813.28303ms" Apr 21 10:45:05.629732 containerd[1449]: time="2026-04-21T10:45:05.629655939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 21 10:45:05.630287 containerd[1449]: time="2026-04-21T10:45:05.630251308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 10:45:06.361252 containerd[1449]: time="2026-04-21T10:45:06.361080790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:45:06.361869 containerd[1449]: time="2026-04-21T10:45:06.361823845Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 21 10:45:06.363785 containerd[1449]: time="2026-04-21T10:45:06.363629541Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:06.366381 containerd[1449]: time="2026-04-21T10:45:06.366311922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:06.369090 containerd[1449]: time="2026-04-21T10:45:06.367162248Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 736.870398ms" Apr 21 10:45:06.369090 containerd[1449]: time="2026-04-21T10:45:06.367499340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 21 10:45:06.370043 containerd[1449]: time="2026-04-21T10:45:06.370001218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 10:45:07.116506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355805265.mount: Deactivated successfully.
Apr 21 10:45:07.360427 containerd[1449]: time="2026-04-21T10:45:07.360291281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:07.361160 containerd[1449]: time="2026-04-21T10:45:07.361098441Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 21 10:45:07.362149 containerd[1449]: time="2026-04-21T10:45:07.362098019Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:07.363995 containerd[1449]: time="2026-04-21T10:45:07.363941627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:07.364671 containerd[1449]: time="2026-04-21T10:45:07.364626047Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 994.547759ms" Apr 21 10:45:07.364671 containerd[1449]: time="2026-04-21T10:45:07.364665299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 21 10:45:07.365389 containerd[1449]: time="2026-04-21T10:45:07.365355089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 10:45:07.749678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773335954.mount: Deactivated successfully. 
Apr 21 10:45:08.528248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:45:08.535006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:45:08.639475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:45:08.643582 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:45:08.687420 kubelet[1930]: E0421 10:45:08.687341 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:45:08.692500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:45:08.692670 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 21 10:45:08.800760 containerd[1449]: time="2026-04-21T10:45:08.800353931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:08.801575 containerd[1449]: time="2026-04-21T10:45:08.801510221Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 21 10:45:08.802845 containerd[1449]: time="2026-04-21T10:45:08.802794930Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:08.805430 containerd[1449]: time="2026-04-21T10:45:08.805383965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:08.806226 containerd[1449]: time="2026-04-21T10:45:08.806178359Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.440777429s" Apr 21 10:45:08.806226 containerd[1449]: time="2026-04-21T10:45:08.806226441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 21 10:45:08.806882 containerd[1449]: time="2026-04-21T10:45:08.806783253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 10:45:09.194258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444040909.mount: Deactivated successfully. 
Apr 21 10:45:09.203065 containerd[1449]: time="2026-04-21T10:45:09.202991572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.203805 containerd[1449]: time="2026-04-21T10:45:09.203745863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 21 10:45:09.204795 containerd[1449]: time="2026-04-21T10:45:09.204658794Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.206765 containerd[1449]: time="2026-04-21T10:45:09.206667969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.207227 containerd[1449]: time="2026-04-21T10:45:09.207156325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 400.33341ms" Apr 21 10:45:09.207227 containerd[1449]: time="2026-04-21T10:45:09.207221991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 10:45:09.208022 containerd[1449]: time="2026-04-21T10:45:09.207989293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 10:45:09.589042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719272924.mount: Deactivated successfully. 
Apr 21 10:45:10.296115 kernel: hrtimer: interrupt took 3032690 ns Apr 21 10:45:11.081207 containerd[1449]: time="2026-04-21T10:45:11.080999630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:11.081931 containerd[1449]: time="2026-04-21T10:45:11.081865479Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 21 10:45:11.091878 containerd[1449]: time="2026-04-21T10:45:11.091825788Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:11.095920 containerd[1449]: time="2026-04-21T10:45:11.095870050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:11.096548 containerd[1449]: time="2026-04-21T10:45:11.096499688Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.888469892s" Apr 21 10:45:11.096548 containerd[1449]: time="2026-04-21T10:45:11.096544746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 21 10:45:12.651872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:45:12.662962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:45:12.695250 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... 
Apr 21 10:45:12.695319 systemd[1]: Reloading... Apr 21 10:45:12.766804 zram_generator::config[2072]: No configuration found. Apr 21 10:45:12.860918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:45:12.910207 systemd[1]: Reloading finished in 214 ms. Apr 21 10:45:12.953795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:45:12.956409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:45:12.957628 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:45:12.957920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:45:12.959332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:45:13.078724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:45:13.084540 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:45:13.290172 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:45:13.482860 kubelet[2125]: I0421 10:45:13.482769 2125 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:45:13.482860 kubelet[2125]: I0421 10:45:13.482839 2125 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:45:13.483002 kubelet[2125]: I0421 10:45:13.482915 2125 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:45:13.483002 kubelet[2125]: I0421 10:45:13.482920 2125 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:45:13.483356 kubelet[2125]: I0421 10:45:13.483310 2125 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:45:13.496248 kubelet[2125]: E0421 10:45:13.496157 2125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:45:13.500128 kubelet[2125]: I0421 10:45:13.499489 2125 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:45:13.505767 kubelet[2125]: E0421 10:45:13.505585 2125 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:45:13.505857 kubelet[2125]: I0421 10:45:13.505793 2125 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:45:13.510943 kubelet[2125]: I0421 10:45:13.510859 2125 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:45:13.512105 kubelet[2125]: I0421 10:45:13.512022 2125 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:45:13.512319 kubelet[2125]: I0421 10:45:13.512064 2125 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:45:13.512319 kubelet[2125]: I0421 10:45:13.512285 2125 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:45:13.512319 kubelet[2125]: I0421 10:45:13.512293 2125 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:45:13.512609 kubelet[2125]: I0421 10:45:13.512543 2125 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:45:13.518828 kubelet[2125]: I0421 10:45:13.518779 2125 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:45:13.518984 kubelet[2125]: I0421 10:45:13.518943 2125 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:45:13.518984 kubelet[2125]: I0421 10:45:13.518977 2125 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:45:13.519495 kubelet[2125]: I0421 10:45:13.519478 2125 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:45:13.519495 kubelet[2125]: I0421 10:45:13.519492 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:45:13.523665 kubelet[2125]: I0421 10:45:13.523569 2125 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:45:13.529411 kubelet[2125]: I0421 10:45:13.529373 2125 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:45:13.529544 kubelet[2125]: I0421 10:45:13.529435 2125 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:45:13.545901 kubelet[2125]: W0421 10:45:13.545219 2125 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:45:13.549543 kubelet[2125]: I0421 10:45:13.549502 2125 server.go:1257] "Started kubelet"
Apr 21 10:45:13.549948 kubelet[2125]: I0421 10:45:13.549866 2125 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:45:13.550000 kubelet[2125]: I0421 10:45:13.549939 2125 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:45:13.551526 kubelet[2125]: I0421 10:45:13.550493 2125 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:45:13.551526 kubelet[2125]: I0421 10:45:13.550667 2125 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:45:13.551526 kubelet[2125]: I0421 10:45:13.551153 2125 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:45:13.551784 kubelet[2125]: I0421 10:45:13.551554 2125 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:45:13.552631 kubelet[2125]: I0421 10:45:13.552397 2125 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:45:13.554359 kubelet[2125]: E0421 10:45:13.553741 2125 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:45:13.554359 kubelet[2125]: I0421 10:45:13.553776 2125 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:45:13.554359 kubelet[2125]: I0421 10:45:13.553892 2125 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:45:13.554359 kubelet[2125]: I0421 10:45:13.553917 2125 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:45:13.554489 kubelet[2125]: E0421 10:45:13.554391 2125 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms"
Apr 21 10:45:13.554622 kubelet[2125]: I0421 10:45:13.554579 2125 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:45:13.554891 kubelet[2125]: I0421 10:45:13.554762 2125 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:45:13.555462 kubelet[2125]: E0421 10:45:13.555400 2125 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:45:13.558323 kubelet[2125]: I0421 10:45:13.555974 2125 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:45:13.560909 kubelet[2125]: E0421 10:45:13.558910 2125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8595e114907ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:45:13.549457326 +0000 UTC m=+0.447154415,LastTimestamp:2026-04-21 10:45:13.549457326 +0000 UTC m=+0.447154415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 21 10:45:13.572537 kubelet[2125]: I0421 10:45:13.572500 2125 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:45:13.572537 kubelet[2125]: I0421 10:45:13.572535 2125 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:45:13.572731 kubelet[2125]: I0421 10:45:13.572550 2125 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:45:13.574013 kubelet[2125]: I0421 10:45:13.573962 2125 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:45:13.575773 kubelet[2125]: I0421 10:45:13.575665 2125 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:45:13.575773 kubelet[2125]: I0421 10:45:13.575767 2125 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:45:13.575842 kubelet[2125]: I0421 10:45:13.575792 2125 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:45:13.575842 kubelet[2125]: E0421 10:45:13.575834 2125 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:45:13.576071 kubelet[2125]: I0421 10:45:13.576021 2125 policy_none.go:50] "Start"
Apr 21 10:45:13.576120 kubelet[2125]: I0421 10:45:13.576095 2125 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:45:13.576175 kubelet[2125]: I0421 10:45:13.576149 2125 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:45:13.579391 kubelet[2125]: I0421 10:45:13.579338 2125 policy_none.go:44] "Start"
Apr 21 10:45:13.585327 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:45:13.618980 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:45:13.638534 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:45:13.658150 kubelet[2125]: E0421 10:45:13.658069 2125 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:45:13.659128 kubelet[2125]: E0421 10:45:13.658992 2125 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:45:13.659218 kubelet[2125]: I0421 10:45:13.659171 2125 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:45:13.659218 kubelet[2125]: I0421 10:45:13.659180 2125 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:45:13.659583 kubelet[2125]: I0421 10:45:13.659427 2125 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:45:13.661014 kubelet[2125]: E0421 10:45:13.660949 2125 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:45:13.661081 kubelet[2125]: E0421 10:45:13.661017 2125 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 10:45:13.718001 systemd[1]: Created slice kubepods-burstable-pod5e38bd8035597eb7adf14f83b23ab194.slice - libcontainer container kubepods-burstable-pod5e38bd8035597eb7adf14f83b23ab194.slice.
Apr 21 10:45:13.743899 kubelet[2125]: E0421 10:45:13.743744 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:13.749586 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 21 10:45:13.752884 kubelet[2125]: E0421 10:45:13.752756 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:13.761069 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 21 10:45:13.761207 kubelet[2125]: I0421 10:45:13.761190 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:13.761239 kubelet[2125]: I0421 10:45:13.761214 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:13.761239 kubelet[2125]: I0421 10:45:13.761231 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:13.762093 kubelet[2125]: I0421 10:45:13.761268 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:13.762093 kubelet[2125]: I0421 10:45:13.761563 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:13.762093 kubelet[2125]: I0421 10:45:13.761753 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:13.762093 kubelet[2125]: I0421 10:45:13.761771 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:13.765062 kubelet[2125]: I0421 10:45:13.764928 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:13.765062 kubelet[2125]: I0421 10:45:13.765001 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:13.765062 kubelet[2125]: E0421 10:45:13.763244 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:13.765062 kubelet[2125]: I0421 10:45:13.762280 2125 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:13.765302 kubelet[2125]: E0421 10:45:13.763107 2125 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms"
Apr 21 10:45:13.765541 kubelet[2125]: E0421 10:45:13.765511 2125 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Apr 21 10:45:13.968998 kubelet[2125]: I0421 10:45:13.968825 2125 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:13.970433 kubelet[2125]: E0421 10:45:13.970349 2125 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Apr 21 10:45:14.068132 kubelet[2125]: E0421 10:45:14.068054 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:14.070662 containerd[1449]: time="2026-04-21T10:45:14.070578151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e38bd8035597eb7adf14f83b23ab194,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:14.071212 kubelet[2125]: E0421 10:45:14.071073 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:14.071897 containerd[1449]: time="2026-04-21T10:45:14.071749174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:14.073746 kubelet[2125]: E0421 10:45:14.073605 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:14.074097 containerd[1449]: time="2026-04-21T10:45:14.074053635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:14.166419 kubelet[2125]: E0421 10:45:14.166299 2125 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms"
Apr 21 10:45:14.376767 kubelet[2125]: I0421 10:45:14.376482 2125 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:14.377107 kubelet[2125]: E0421 10:45:14.376907 2125 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Apr 21 10:45:14.468769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092606742.mount: Deactivated successfully.
Apr 21 10:45:14.477274 containerd[1449]: time="2026-04-21T10:45:14.477220173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:45:14.478886 containerd[1449]: time="2026-04-21T10:45:14.478734933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 21 10:45:14.483336 containerd[1449]: time="2026-04-21T10:45:14.483251127Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:45:14.484446 containerd[1449]: time="2026-04-21T10:45:14.484405516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:45:14.485714 containerd[1449]: time="2026-04-21T10:45:14.485568514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:45:14.486741 containerd[1449]: time="2026-04-21T10:45:14.486597786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:45:14.487635 containerd[1449]: time="2026-04-21T10:45:14.487545087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:45:14.488884 containerd[1449]: time="2026-04-21T10:45:14.488762636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:45:14.489375 containerd[1449]: time="2026-04-21T10:45:14.489344755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.241655ms"
Apr 21 10:45:14.492332 containerd[1449]: time="2026-04-21T10:45:14.492244215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 421.555355ms"
Apr 21 10:45:14.492670 containerd[1449]: time="2026-04-21T10:45:14.492611894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 420.81394ms"
Apr 21 10:45:14.843754 containerd[1449]: time="2026-04-21T10:45:14.843263221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:14.843754 containerd[1449]: time="2026-04-21T10:45:14.843325462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:14.843754 containerd[1449]: time="2026-04-21T10:45:14.843338437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:14.843754 containerd[1449]: time="2026-04-21T10:45:14.843582037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:14.855242 containerd[1449]: time="2026-04-21T10:45:14.854463053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:14.855242 containerd[1449]: time="2026-04-21T10:45:14.854526328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:14.855242 containerd[1449]: time="2026-04-21T10:45:14.854535150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:14.855242 containerd[1449]: time="2026-04-21T10:45:14.854583069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:14.855760 containerd[1449]: time="2026-04-21T10:45:14.855449791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:14.857418 containerd[1449]: time="2026-04-21T10:45:14.857383473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:14.857532 containerd[1449]: time="2026-04-21T10:45:14.857487823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:14.857766 containerd[1449]: time="2026-04-21T10:45:14.857670935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:15.037483 kubelet[2125]: E0421 10:45:15.037415 2125 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s"
Apr 21 10:45:15.061022 systemd[1]: Started cri-containerd-30fd63da07cdf98ff1069793aa24e2cbf0c29832cd09d5a94d44910c64bec95b.scope - libcontainer container 30fd63da07cdf98ff1069793aa24e2cbf0c29832cd09d5a94d44910c64bec95b.
Apr 21 10:45:15.097886 systemd[1]: Started cri-containerd-e25001ec47e0b344503cbea1c8d6b7dff830e9e6b962370a788f30946b7d8860.scope - libcontainer container e25001ec47e0b344503cbea1c8d6b7dff830e9e6b962370a788f30946b7d8860.
Apr 21 10:45:15.101483 systemd[1]: Started cri-containerd-3f8c7b26d6ab617c1c88deb9c9caaa26ba443db3653838d90fa27c361788b01c.scope - libcontainer container 3f8c7b26d6ab617c1c88deb9c9caaa26ba443db3653838d90fa27c361788b01c.
Apr 21 10:45:15.158451 containerd[1449]: time="2026-04-21T10:45:15.158381857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e38bd8035597eb7adf14f83b23ab194,Namespace:kube-system,Attempt:0,} returns sandbox id \"e25001ec47e0b344503cbea1c8d6b7dff830e9e6b962370a788f30946b7d8860\""
Apr 21 10:45:15.161382 kubelet[2125]: E0421 10:45:15.160320 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.168061 containerd[1449]: time="2026-04-21T10:45:15.167994272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"30fd63da07cdf98ff1069793aa24e2cbf0c29832cd09d5a94d44910c64bec95b\""
Apr 21 10:45:15.168676 containerd[1449]: time="2026-04-21T10:45:15.168580470Z" level=info msg="CreateContainer within sandbox \"e25001ec47e0b344503cbea1c8d6b7dff830e9e6b962370a788f30946b7d8860\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 10:45:15.169061 kubelet[2125]: E0421 10:45:15.168977 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.173455 containerd[1449]: time="2026-04-21T10:45:15.173345310Z" level=info msg="CreateContainer within sandbox \"30fd63da07cdf98ff1069793aa24e2cbf0c29832cd09d5a94d44910c64bec95b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 10:45:15.179529 containerd[1449]: time="2026-04-21T10:45:15.179403509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f8c7b26d6ab617c1c88deb9c9caaa26ba443db3653838d90fa27c361788b01c\""
Apr 21 10:45:15.180430 kubelet[2125]: I0421 10:45:15.180344 2125 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:15.181902 kubelet[2125]: E0421 10:45:15.180904 2125 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Apr 21 10:45:15.181902 kubelet[2125]: E0421 10:45:15.181131 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.185596 containerd[1449]: time="2026-04-21T10:45:15.185559612Z" level=info msg="CreateContainer within sandbox \"3f8c7b26d6ab617c1c88deb9c9caaa26ba443db3653838d90fa27c361788b01c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 10:45:15.197347 containerd[1449]: time="2026-04-21T10:45:15.197184675Z" level=info msg="CreateContainer within sandbox \"e25001ec47e0b344503cbea1c8d6b7dff830e9e6b962370a788f30946b7d8860\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28cd42e8e020869f1159371c085dd5b9fb7e67ad1589d2f45a8fce7fe4043e5d\""
Apr 21 10:45:15.198416 containerd[1449]: time="2026-04-21T10:45:15.198370232Z" level=info msg="StartContainer for \"28cd42e8e020869f1159371c085dd5b9fb7e67ad1589d2f45a8fce7fe4043e5d\""
Apr 21 10:45:15.201243 containerd[1449]: time="2026-04-21T10:45:15.201203876Z" level=info msg="CreateContainer within sandbox \"30fd63da07cdf98ff1069793aa24e2cbf0c29832cd09d5a94d44910c64bec95b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"791ea857b8bba0615a6c16c93668c39d907d6c5e118f10f5f381835795475f29\""
Apr 21 10:45:15.201771 containerd[1449]: time="2026-04-21T10:45:15.201672151Z" level=info msg="StartContainer for \"791ea857b8bba0615a6c16c93668c39d907d6c5e118f10f5f381835795475f29\""
Apr 21 10:45:15.209570 containerd[1449]: time="2026-04-21T10:45:15.209375063Z" level=info msg="CreateContainer within sandbox \"3f8c7b26d6ab617c1c88deb9c9caaa26ba443db3653838d90fa27c361788b01c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a4d458f2e92311f14b0f046c0c787fc20af00ebd28cc284d33df0c81ad180e2\""
Apr 21 10:45:15.211122 containerd[1449]: time="2026-04-21T10:45:15.210202299Z" level=info msg="StartContainer for \"4a4d458f2e92311f14b0f046c0c787fc20af00ebd28cc284d33df0c81ad180e2\""
Apr 21 10:45:15.233240 systemd[1]: Started cri-containerd-28cd42e8e020869f1159371c085dd5b9fb7e67ad1589d2f45a8fce7fe4043e5d.scope - libcontainer container 28cd42e8e020869f1159371c085dd5b9fb7e67ad1589d2f45a8fce7fe4043e5d.
Apr 21 10:45:15.235989 systemd[1]: Started cri-containerd-4a4d458f2e92311f14b0f046c0c787fc20af00ebd28cc284d33df0c81ad180e2.scope - libcontainer container 4a4d458f2e92311f14b0f046c0c787fc20af00ebd28cc284d33df0c81ad180e2.
Apr 21 10:45:15.240240 systemd[1]: Started cri-containerd-791ea857b8bba0615a6c16c93668c39d907d6c5e118f10f5f381835795475f29.scope - libcontainer container 791ea857b8bba0615a6c16c93668c39d907d6c5e118f10f5f381835795475f29.
Apr 21 10:45:15.307843 containerd[1449]: time="2026-04-21T10:45:15.307787801Z" level=info msg="StartContainer for \"4a4d458f2e92311f14b0f046c0c787fc20af00ebd28cc284d33df0c81ad180e2\" returns successfully"
Apr 21 10:45:15.307953 containerd[1449]: time="2026-04-21T10:45:15.307895117Z" level=info msg="StartContainer for \"791ea857b8bba0615a6c16c93668c39d907d6c5e118f10f5f381835795475f29\" returns successfully"
Apr 21 10:45:15.307953 containerd[1449]: time="2026-04-21T10:45:15.307913847Z" level=info msg="StartContainer for \"28cd42e8e020869f1159371c085dd5b9fb7e67ad1589d2f45a8fce7fe4043e5d\" returns successfully"
Apr 21 10:45:15.606577 kubelet[2125]: E0421 10:45:15.602947 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:15.606577 kubelet[2125]: E0421 10:45:15.603106 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.612401 kubelet[2125]: E0421 10:45:15.612332 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:15.612481 kubelet[2125]: E0421 10:45:15.612453 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.616554 kubelet[2125]: E0421 10:45:15.616499 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:15.616794 kubelet[2125]: E0421 10:45:15.616612 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:15.665483 kubelet[2125]: E0421 10:45:15.665425 2125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:45:16.619182 kubelet[2125]: E0421 10:45:16.619140 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:16.619450 kubelet[2125]: E0421 10:45:16.619334 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:16.620241 kubelet[2125]: E0421 10:45:16.620202 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:16.620341 kubelet[2125]: E0421 10:45:16.620316 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:16.821103 kubelet[2125]: I0421 10:45:16.820589 2125 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:17.786173 kubelet[2125]: E0421 10:45:17.786082 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:17.786475 kubelet[2125]: E0421 10:45:17.786400 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:17.961199 kubelet[2125]: E0421 10:45:17.961076 2125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:45:17.961342 kubelet[2125]: E0421 10:45:17.961237 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:18.730904 kubelet[2125]: E0421 10:45:18.730872 2125 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 21 10:45:18.823244 kubelet[2125]: I0421 10:45:18.823144 2125 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 10:45:18.856980 kubelet[2125]: I0421 10:45:18.855002 2125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:18.862526 kubelet[2125]: E0421 10:45:18.862256 2125 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:18.862526 kubelet[2125]: I0421 10:45:18.862280 2125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:18.864352 kubelet[2125]: E0421 10:45:18.864285 2125 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:18.864352 kubelet[2125]: I0421 10:45:18.864326 2125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:18.865876 kubelet[2125]: E0421 10:45:18.865854 2125 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:19.602064 kubelet[2125]: I0421 10:45:19.602013 2125 apiserver.go:52] "Watching apiserver"
Apr 21 10:45:19.654427 kubelet[2125]: I0421 10:45:19.654337 2125 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:45:20.810074 kubelet[2125]: I0421 10:45:20.809766 2125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:20.822047 kubelet[2125]: E0421 10:45:20.821967 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:21.051293 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)...
Apr 21 10:45:21.051324 systemd[1]: Reloading...
Apr 21 10:45:21.123970 zram_generator::config[2448]: No configuration found.
Apr 21 10:45:21.198988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:45:21.253127 systemd[1]: Reloading finished in 201 ms.
Apr 21 10:45:21.289206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:45:21.304806 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:45:21.305013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:45:21.305074 systemd[1]: kubelet.service: Consumed 1.847s CPU time, 129.6M memory peak, 0B memory swap peak.
Apr 21 10:45:21.313246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:45:21.793257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:45:21.797263 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:45:21.844557 kubelet[2494]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:45:21.851773 kubelet[2494]: I0421 10:45:21.850818 2494 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:45:21.851773 kubelet[2494]: I0421 10:45:21.850848 2494 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:45:21.851773 kubelet[2494]: I0421 10:45:21.850861 2494 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:45:21.851773 kubelet[2494]: I0421 10:45:21.850865 2494 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:45:21.851773 kubelet[2494]: I0421 10:45:21.851082 2494 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:45:21.854205 kubelet[2494]: I0421 10:45:21.854190 2494 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 10:45:21.856383 kubelet[2494]: I0421 10:45:21.856368 2494 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:45:21.872899 kubelet[2494]: E0421 10:45:21.872794 2494 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:45:21.872899 kubelet[2494]: I0421 10:45:21.872847 2494 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:45:21.876392 kubelet[2494]: I0421 10:45:21.876348 2494 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:45:21.876609 kubelet[2494]: I0421 10:45:21.876561 2494 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:45:21.877258 kubelet[2494]: I0421 10:45:21.876604 2494 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:45:21.877395 kubelet[2494]: I0421 10:45:21.877275 2494 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:45:21.877395 kubelet[2494]: I0421 10:45:21.877284 2494 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:45:21.877395 kubelet[2494]: I0421 10:45:21.877302 2494 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:45:21.877529 kubelet[2494]: I0421 10:45:21.877490 2494 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:45:21.877649 kubelet[2494]: I0421 10:45:21.877626 2494 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:45:21.877759 kubelet[2494]: I0421 10:45:21.877677 2494 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:45:21.877781 kubelet[2494]: I0421 10:45:21.877770 2494 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:45:21.877781 kubelet[2494]: I0421 10:45:21.877777 2494 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:45:21.878969 kubelet[2494]: I0421 10:45:21.878926 2494 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:45:21.879538 kubelet[2494]: I0421 10:45:21.879511 2494 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:45:21.879602 kubelet[2494]: I0421 10:45:21.879552 2494 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.884479 2494 server.go:1257] "Started kubelet"
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.884748 2494 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.884789 2494 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.884964 2494 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.885412 2494 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:45:21.886366 kubelet[2494]: I0421 10:45:21.886078 2494 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:45:21.887947 kubelet[2494]: I0421 10:45:21.887900 2494 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:45:21.888178 kubelet[2494]: I0421 10:45:21.888144 2494 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:45:21.891511 kubelet[2494]: I0421 10:45:21.891500 2494 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:45:21.892126 kubelet[2494]: I0421 10:45:21.892117 2494 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:45:21.893943 kubelet[2494]: I0421 10:45:21.892427 2494 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:45:21.893943 kubelet[2494]: I0421 10:45:21.892864 2494 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:45:21.893943 kubelet[2494]: I0421 10:45:21.893926 2494 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:45:21.894166 kubelet[2494]: I0421 10:45:21.893977 2494 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:45:21.895272 kubelet[2494]: E0421 10:45:21.895104 2494 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:45:21.912121 kubelet[2494]: I0421 10:45:21.912073 2494 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:45:21.914574 kubelet[2494]: I0421 10:45:21.914528 2494 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:45:21.914574 kubelet[2494]: I0421 10:45:21.914562 2494 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:45:21.914574 kubelet[2494]: I0421 10:45:21.914579 2494 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:45:21.914680 kubelet[2494]: E0421 10:45:21.914619 2494 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933095 2494 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933106 2494 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933120 2494 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933251 2494 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933259 2494 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933271 2494 policy_none.go:50] "Start"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933277 2494 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:45:21.933371 kubelet[2494]: I0421 10:45:21.933284 2494 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:45:21.934104 kubelet[2494]: I0421 10:45:21.934033 2494 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 10:45:21.934240 kubelet[2494]: I0421 10:45:21.934235 2494 policy_none.go:44] "Start"
Apr 21 10:45:21.938962 kubelet[2494]: E0421 10:45:21.938925 2494 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:45:21.939235 kubelet[2494]: I0421 10:45:21.939112 2494 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:45:21.939235 kubelet[2494]: I0421 10:45:21.939145 2494 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:45:21.939411 kubelet[2494]: I0421 10:45:21.939341 2494 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:45:21.942609 kubelet[2494]: E0421 10:45:21.942585 2494 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:45:22.018551 kubelet[2494]: I0421 10:45:22.018461 2494 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:22.018892 kubelet[2494]: I0421 10:45:22.018803 2494 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:22.018974 kubelet[2494]: I0421 10:45:22.018475 2494 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.028625 kubelet[2494]: E0421 10:45:22.028576 2494 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.045543 kubelet[2494]: I0421 10:45:22.045421 2494 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:45:22.047444 sudo[2538]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 21 10:45:22.048028 sudo[2538]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 21 10:45:22.057815 kubelet[2494]: I0421 10:45:22.057172 2494 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 21 10:45:22.057815 kubelet[2494]: I0421 10:45:22.057225 2494 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 10:45:22.197257 kubelet[2494]: I0421 10:45:22.197076 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.197257 kubelet[2494]: I0421 10:45:22.197149 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.197257 kubelet[2494]: I0421 10:45:22.197162 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:22.197257 kubelet[2494]: I0421 10:45:22.197175 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:22.197257 kubelet[2494]: I0421 10:45:22.197213 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.197491 kubelet[2494]: I0421 10:45:22.197225 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.197491 kubelet[2494]: I0421 10:45:22.197263 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:45:22.197491 kubelet[2494]: I0421 10:45:22.197277 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:22.197491 kubelet[2494]: I0421 10:45:22.197340 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e38bd8035597eb7adf14f83b23ab194-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e38bd8035597eb7adf14f83b23ab194\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:22.328235 kubelet[2494]: E0421 10:45:22.327561 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:22.330968 kubelet[2494]: E0421 10:45:22.330888 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:22.331361 kubelet[2494]: E0421 10:45:22.331235 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:23.004854 kubelet[2494]: I0421 10:45:23.000823 2494 apiserver.go:52] "Watching apiserver"
Apr 21 10:45:23.012307 kubelet[2494]: I0421 10:45:23.010554 2494 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:23.013038 kubelet[2494]: I0421 10:45:23.011104 2494 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:23.024928 kubelet[2494]: E0421 10:45:23.021944 2494 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:45:23.024928 kubelet[2494]: E0421 10:45:23.022164 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:23.031481 kubelet[2494]: E0421 10:45:23.030531 2494 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:45:23.036192 kubelet[2494]: E0421 10:45:23.036114 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:23.040171 kubelet[2494]: E0421 10:45:23.040086 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:23.098160 kubelet[2494]: I0421 10:45:23.097672 2494 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:45:23.166449 sudo[2538]: pam_unix(sudo:session): session closed for user root
Apr 21 10:45:23.179604 kubelet[2494]: I0421 10:45:23.179431 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.179325284 podStartE2EDuration="3.179325284s" podCreationTimestamp="2026-04-21 10:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:23.17920006 +0000 UTC m=+1.374969925" watchObservedRunningTime="2026-04-21 10:45:23.179325284 +0000 UTC m=+1.375095157"
Apr 21 10:45:23.193588 kubelet[2494]: I0421 10:45:23.193509 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.193493281 podStartE2EDuration="1.193493281s" podCreationTimestamp="2026-04-21 10:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:23.193238907 +0000 UTC m=+1.389008775" watchObservedRunningTime="2026-04-21 10:45:23.193493281 +0000 UTC m=+1.389263155"
Apr 21 10:45:24.021575 kubelet[2494]: E0421 10:45:24.021476 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:24.029167 kubelet[2494]: E0421 10:45:24.029073 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:24.079438 kubelet[2494]: I0421 10:45:24.079323 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.07930236 podStartE2EDuration="2.07930236s" podCreationTimestamp="2026-04-21 10:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:23.314411878 +0000 UTC m=+1.510181753" watchObservedRunningTime="2026-04-21 10:45:24.07930236 +0000 UTC m=+2.275072224"
Apr 21 10:45:25.020210 kubelet[2494]: E0421 10:45:25.019911 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:25.028957 kubelet[2494]: E0421 10:45:25.026132 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:26.807061 sudo[1629]: pam_unix(sudo:session): session closed for user root
Apr 21 10:45:26.808379 sshd[1626]: pam_unix(sshd:session): session closed for user core
Apr 21 10:45:26.811461 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:42028.service: Deactivated successfully.
Apr 21 10:45:26.813254 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:45:26.813425 systemd[1]: session-7.scope: Consumed 6.239s CPU time, 164.6M memory peak, 0B memory swap peak.
Apr 21 10:45:26.814117 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:45:26.815853 systemd-logind[1436]: Removed session 7.
Apr 21 10:45:27.656089 kubelet[2494]: E0421 10:45:27.656023 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:28.064648 kubelet[2494]: I0421 10:45:28.064217 2494 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 10:45:28.065150 containerd[1449]: time="2026-04-21T10:45:28.064977025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 10:45:28.068919 kubelet[2494]: I0421 10:45:28.068492 2494 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 21 10:45:28.462477 kubelet[2494]: E0421 10:45:28.462290 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:28.749141 kubelet[2494]: E0421 10:45:28.748672 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:29.155647 systemd[1]: Created slice kubepods-burstable-podf3a7b6e4_8033_458b_885c_e336932661cb.slice - libcontainer container kubepods-burstable-podf3a7b6e4_8033_458b_885c_e336932661cb.slice.
Apr 21 10:45:29.163463 systemd[1]: Created slice kubepods-besteffort-poda67e876f_35ba_42ad_9dee_ef665b72aeee.slice - libcontainer container kubepods-besteffort-poda67e876f_35ba_42ad_9dee_ef665b72aeee.slice.
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250631 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c57p\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-kube-api-access-5c57p\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250773 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-bpf-maps\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250793 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-xtables-lock\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250806 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-config-path\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250817 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-net\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.250820 kubelet[2494]: I0421 10:45:29.250829 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a67e876f-35ba-42ad-9dee-ef665b72aeee-lib-modules\") pod \"kube-proxy-csrcs\" (UID: \"a67e876f-35ba-42ad-9dee-ef665b72aeee\") " pod="kube-system/kube-proxy-csrcs"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250840 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-run\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250852 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-hostproc\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250862 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-etc-cni-netd\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250872 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-lib-modules\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250882 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-kernel\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251224 kubelet[2494]: I0421 10:45:29.250948 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-cgroup\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.250980 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-hubble-tls\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.251040 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a67e876f-35ba-42ad-9dee-ef665b72aeee-kube-proxy\") pod \"kube-proxy-csrcs\" (UID: \"a67e876f-35ba-42ad-9dee-ef665b72aeee\") " pod="kube-system/kube-proxy-csrcs"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.251071 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a67e876f-35ba-42ad-9dee-ef665b72aeee-xtables-lock\") pod \"kube-proxy-csrcs\" (UID: \"a67e876f-35ba-42ad-9dee-ef665b72aeee\") " pod="kube-system/kube-proxy-csrcs"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.251097 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4sx\" (UniqueName: \"kubernetes.io/projected/a67e876f-35ba-42ad-9dee-ef665b72aeee-kube-api-access-rc4sx\") pod \"kube-proxy-csrcs\" (UID: \"a67e876f-35ba-42ad-9dee-ef665b72aeee\") " pod="kube-system/kube-proxy-csrcs"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.251117 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cni-path\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.251395 kubelet[2494]: I0421 10:45:29.251149 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a7b6e4-8033-458b-885c-e336932661cb-clustermesh-secrets\") pod \"cilium-ksm25\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") " pod="kube-system/cilium-ksm25"
Apr 21 10:45:29.317317 systemd[1]: Created slice kubepods-besteffort-pod840da7cd_dd7a_4436_80f6_84d12ea3d902.slice - libcontainer container kubepods-besteffort-pod840da7cd_dd7a_4436_80f6_84d12ea3d902.slice.
Apr 21 10:45:29.353894 kubelet[2494]: I0421 10:45:29.353574 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvpz\" (UniqueName: \"kubernetes.io/projected/840da7cd-dd7a-4436-80f6-84d12ea3d902-kube-api-access-cwvpz\") pod \"cilium-operator-78cf5644cb-mjg2c\" (UID: \"840da7cd-dd7a-4436-80f6-84d12ea3d902\") " pod="kube-system/cilium-operator-78cf5644cb-mjg2c"
Apr 21 10:45:29.353894 kubelet[2494]: I0421 10:45:29.353900 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840da7cd-dd7a-4436-80f6-84d12ea3d902-cilium-config-path\") pod \"cilium-operator-78cf5644cb-mjg2c\" (UID: \"840da7cd-dd7a-4436-80f6-84d12ea3d902\") " pod="kube-system/cilium-operator-78cf5644cb-mjg2c"
Apr 21 10:45:29.465128 kubelet[2494]: E0421 10:45:29.464044 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:29.466950 containerd[1449]: time="2026-04-21T10:45:29.466657132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ksm25,Uid:f3a7b6e4-8033-458b-885c-e336932661cb,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:29.483599 kubelet[2494]: E0421 10:45:29.483138 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:29.486335 containerd[1449]: time="2026-04-21T10:45:29.485988172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csrcs,Uid:a67e876f-35ba-42ad-9dee-ef665b72aeee,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:29.630972 kubelet[2494]: E0421 10:45:29.630830 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:29.634099 containerd[1449]: time="2026-04-21T10:45:29.634002973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-mjg2c,Uid:840da7cd-dd7a-4436-80f6-84d12ea3d902,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:29.682835 containerd[1449]: time="2026-04-21T10:45:29.681662571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:29.682835 containerd[1449]: time="2026-04-21T10:45:29.682052103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:29.682835 containerd[1449]: time="2026-04-21T10:45:29.682065290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.682835 containerd[1449]: time="2026-04-21T10:45:29.682375866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.711885 containerd[1449]: time="2026-04-21T10:45:29.711469217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:29.711885 containerd[1449]: time="2026-04-21T10:45:29.711615549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:29.711885 containerd[1449]: time="2026-04-21T10:45:29.711627384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.712420 containerd[1449]: time="2026-04-21T10:45:29.712148886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.713864 containerd[1449]: time="2026-04-21T10:45:29.713597424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:29.713864 containerd[1449]: time="2026-04-21T10:45:29.713627932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:29.713864 containerd[1449]: time="2026-04-21T10:45:29.713636358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.713864 containerd[1449]: time="2026-04-21T10:45:29.713680381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:29.727861 systemd[1]: Started cri-containerd-f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24.scope - libcontainer container f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24.
Apr 21 10:45:29.765011 systemd[1]: Started cri-containerd-a6714746aecb0521ff022df1111b62408bf2414817f85642c765198480be7794.scope - libcontainer container a6714746aecb0521ff022df1111b62408bf2414817f85642c765198480be7794.
Apr 21 10:45:29.785820 systemd[1]: Started cri-containerd-1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71.scope - libcontainer container 1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71.
Apr 21 10:45:29.853534 containerd[1449]: time="2026-04-21T10:45:29.852175539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ksm25,Uid:f3a7b6e4-8033-458b-885c-e336932661cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\""
Apr 21 10:45:30.151122 kubelet[2494]: E0421 10:45:30.135850 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:30.196264 containerd[1449]: time="2026-04-21T10:45:30.196018682Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 21 10:45:30.263391 containerd[1449]: time="2026-04-21T10:45:30.263326178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csrcs,Uid:a67e876f-35ba-42ad-9dee-ef665b72aeee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6714746aecb0521ff022df1111b62408bf2414817f85642c765198480be7794\""
Apr 21 10:45:30.264367 kubelet[2494]: E0421 10:45:30.264318 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:30.272049 containerd[1449]: time="2026-04-21T10:45:30.271944637Z" level=info msg="CreateContainer within sandbox \"a6714746aecb0521ff022df1111b62408bf2414817f85642c765198480be7794\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:45:30.294771 containerd[1449]: time="2026-04-21T10:45:30.294069001Z" level=info msg="CreateContainer within sandbox \"a6714746aecb0521ff022df1111b62408bf2414817f85642c765198480be7794\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f7ed1e645048aa6e56d0022124b8cd23ff58e16382a35a9228c9adb2863e27b\""
Apr 21 10:45:30.295471 containerd[1449]: time="2026-04-21T10:45:30.295419546Z" level=info msg="StartContainer for \"4f7ed1e645048aa6e56d0022124b8cd23ff58e16382a35a9228c9adb2863e27b\""
Apr 21 10:45:30.306217 containerd[1449]: time="2026-04-21T10:45:30.306149468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-mjg2c,Uid:840da7cd-dd7a-4436-80f6-84d12ea3d902,Namespace:kube-system,Attempt:0,} returns sandbox id \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\""
Apr 21 10:45:30.307180 kubelet[2494]: E0421 10:45:30.307040 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:30.353937 systemd[1]: Started cri-containerd-4f7ed1e645048aa6e56d0022124b8cd23ff58e16382a35a9228c9adb2863e27b.scope - libcontainer container 4f7ed1e645048aa6e56d0022124b8cd23ff58e16382a35a9228c9adb2863e27b.
Apr 21 10:45:30.419871 containerd[1449]: time="2026-04-21T10:45:30.419606584Z" level=info msg="StartContainer for \"4f7ed1e645048aa6e56d0022124b8cd23ff58e16382a35a9228c9adb2863e27b\" returns successfully"
Apr 21 10:45:31.240619 kubelet[2494]: E0421 10:45:31.240503 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:31.253633 kubelet[2494]: I0421 10:45:31.253511 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-csrcs" podStartSLOduration=2.253500487 podStartE2EDuration="2.253500487s" podCreationTimestamp="2026-04-21 10:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:31.253210581 +0000 UTC m=+9.448980456" watchObservedRunningTime="2026-04-21 10:45:31.253500487 +0000 UTC m=+9.449270361"
Apr 21 10:45:36.971064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164814555.mount: Deactivated successfully.
Apr 21 10:45:37.665934 kubelet[2494]: E0421 10:45:37.665793 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:38.466480 kubelet[2494]: E0421 10:45:38.466292 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:38.753614 kubelet[2494]: E0421 10:45:38.752996 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:38.854030 containerd[1449]: time="2026-04-21T10:45:38.853957129Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:45:38.854457 containerd[1449]: time="2026-04-21T10:45:38.854351626Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 21 10:45:38.855573 containerd[1449]: time="2026-04-21T10:45:38.855519778Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:45:38.856774 containerd[1449]: time="2026-04-21T10:45:38.856639526Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.660480395s"
Apr 21 10:45:38.856774 containerd[1449]: time="2026-04-21T10:45:38.856761959Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 21 10:45:38.860890 containerd[1449]: time="2026-04-21T10:45:38.860325613Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 21 10:45:38.867335 containerd[1449]: time="2026-04-21T10:45:38.867272072Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 10:45:38.889525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956555879.mount: Deactivated successfully.
Apr 21 10:45:38.897372 containerd[1449]: time="2026-04-21T10:45:38.897233993Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\""
Apr 21 10:45:38.900353 containerd[1449]: time="2026-04-21T10:45:38.900180621Z" level=info msg="StartContainer for \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\""
Apr 21 10:45:38.965916 systemd[1]: Started cri-containerd-fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727.scope - libcontainer container fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727.
Apr 21 10:45:38.998967 containerd[1449]: time="2026-04-21T10:45:38.998826521Z" level=info msg="StartContainer for \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\" returns successfully"
Apr 21 10:45:39.008637 systemd[1]: cri-containerd-fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727.scope: Deactivated successfully.
Apr 21 10:45:39.172447 containerd[1449]: time="2026-04-21T10:45:39.172369930Z" level=info msg="shim disconnected" id=fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727 namespace=k8s.io
Apr 21 10:45:39.172447 containerd[1449]: time="2026-04-21T10:45:39.172435007Z" level=warning msg="cleaning up after shim disconnected" id=fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727 namespace=k8s.io
Apr 21 10:45:39.172447 containerd[1449]: time="2026-04-21T10:45:39.172442436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:45:39.268673 kubelet[2494]: E0421 10:45:39.268364 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:39.276061 containerd[1449]: time="2026-04-21T10:45:39.275955278Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 10:45:39.292137 containerd[1449]: time="2026-04-21T10:45:39.291352495Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\""
Apr 21 10:45:39.297749 containerd[1449]: time="2026-04-21T10:45:39.295076825Z" level=info msg="StartContainer for \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\""
Apr 21 10:45:39.336056 systemd[1]: Started cri-containerd-1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670.scope - libcontainer container 1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670.
Apr 21 10:45:39.370442 containerd[1449]: time="2026-04-21T10:45:39.370397294Z" level=info msg="StartContainer for \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\" returns successfully"
Apr 21 10:45:39.385086 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:45:39.385267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:45:39.385339 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:45:39.393902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:45:39.394233 systemd[1]: cri-containerd-1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670.scope: Deactivated successfully.
Apr 21 10:45:39.422520 containerd[1449]: time="2026-04-21T10:45:39.422407524Z" level=info msg="shim disconnected" id=1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670 namespace=k8s.io
Apr 21 10:45:39.422520 containerd[1449]: time="2026-04-21T10:45:39.422474656Z" level=warning msg="cleaning up after shim disconnected" id=1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670 namespace=k8s.io
Apr 21 10:45:39.422520 containerd[1449]: time="2026-04-21T10:45:39.422481866Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:45:39.427851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:45:39.902376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727-rootfs.mount: Deactivated successfully.
Apr 21 10:45:40.131504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591085448.mount: Deactivated successfully.
Apr 21 10:45:40.277985 kubelet[2494]: E0421 10:45:40.276472 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:40.345237 containerd[1449]: time="2026-04-21T10:45:40.341800252Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 10:45:40.362801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033966998.mount: Deactivated successfully.
Apr 21 10:45:40.377237 containerd[1449]: time="2026-04-21T10:45:40.377134782Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\""
Apr 21 10:45:40.378583 containerd[1449]: time="2026-04-21T10:45:40.378517909Z" level=info msg="StartContainer for \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\""
Apr 21 10:45:40.432398 systemd[1]: Started cri-containerd-24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4.scope - libcontainer container 24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4.
Apr 21 10:45:40.465269 systemd[1]: cri-containerd-24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4.scope: Deactivated successfully.
Apr 21 10:45:40.472733 containerd[1449]: time="2026-04-21T10:45:40.472438241Z" level=info msg="StartContainer for \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\" returns successfully"
Apr 21 10:45:40.507511 containerd[1449]: time="2026-04-21T10:45:40.507424940Z" level=info msg="shim disconnected" id=24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4 namespace=k8s.io
Apr 21 10:45:40.507511 containerd[1449]: time="2026-04-21T10:45:40.507493790Z" level=warning msg="cleaning up after shim disconnected" id=24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4 namespace=k8s.io
Apr 21 10:45:40.507511 containerd[1449]: time="2026-04-21T10:45:40.507505510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:45:40.640582 containerd[1449]: time="2026-04-21T10:45:40.640393020Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:45:40.641215 containerd[1449]: time="2026-04-21T10:45:40.641179114Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 21 10:45:40.642905 containerd[1449]: time="2026-04-21T10:45:40.642812594Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:45:40.644422 containerd[1449]: time="2026-04-21T10:45:40.644395107Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.783938148s"
Apr 21 10:45:40.644461 containerd[1449]: time="2026-04-21T10:45:40.644430149Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 21 10:45:40.651239 containerd[1449]: time="2026-04-21T10:45:40.651186226Z" level=info msg="CreateContainer within sandbox \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 21 10:45:40.663810 containerd[1449]: time="2026-04-21T10:45:40.663754339Z" level=info msg="CreateContainer within sandbox \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\""
Apr 21 10:45:40.664178 containerd[1449]: time="2026-04-21T10:45:40.664134349Z" level=info msg="StartContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\""
Apr 21 10:45:40.687860 systemd[1]: Started cri-containerd-2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df.scope - libcontainer container 2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df.
Apr 21 10:45:40.708307 containerd[1449]: time="2026-04-21T10:45:40.708127722Z" level=info msg="StartContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" returns successfully"
Apr 21 10:45:40.851025 update_engine[1437]: I20260421 10:45:40.850898 1437 update_attempter.cc:509] Updating boot flags...
Apr 21 10:45:40.875742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3146)
Apr 21 10:45:40.904724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3148)
Apr 21 10:45:41.287679 kubelet[2494]: E0421 10:45:41.287540 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:41.289582 kubelet[2494]: E0421 10:45:41.289006 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:41.311777 containerd[1449]: time="2026-04-21T10:45:41.311568112Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 10:45:41.338285 containerd[1449]: time="2026-04-21T10:45:41.338161102Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\""
Apr 21 10:45:41.340168 containerd[1449]: time="2026-04-21T10:45:41.339836728Z" level=info msg="StartContainer for \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\""
Apr 21 10:45:41.351530 kubelet[2494]: I0421 10:45:41.351443 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-mjg2c" podStartSLOduration=2.013536202 podStartE2EDuration="12.351421918s" podCreationTimestamp="2026-04-21 10:45:29 +0000 UTC" firstStartedPulling="2026-04-21 10:45:30.307930057 +0000 UTC m=+8.503699922" lastFinishedPulling="2026-04-21 10:45:40.645815773 +0000 UTC m=+18.841585638" observedRunningTime="2026-04-21 10:45:41.351121212 +0000 UTC m=+19.546891087" watchObservedRunningTime="2026-04-21 10:45:41.351421918 +0000 UTC m=+19.547191793"
Apr 21 10:45:41.461410 systemd[1]: Started cri-containerd-65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e.scope - libcontainer container 65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e.
Apr 21 10:45:41.488023 systemd[1]: cri-containerd-65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e.scope: Deactivated successfully.
Apr 21 10:45:41.493990 containerd[1449]: time="2026-04-21T10:45:41.493955228Z" level=info msg="StartContainer for \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\" returns successfully"
Apr 21 10:45:41.522927 containerd[1449]: time="2026-04-21T10:45:41.522821068Z" level=info msg="shim disconnected" id=65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e namespace=k8s.io
Apr 21 10:45:41.522927 containerd[1449]: time="2026-04-21T10:45:41.522880108Z" level=warning msg="cleaning up after shim disconnected" id=65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e namespace=k8s.io
Apr 21 10:45:41.522927 containerd[1449]: time="2026-04-21T10:45:41.522888130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:45:41.891505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e-rootfs.mount: Deactivated successfully.
Apr 21 10:45:42.294435 kubelet[2494]: E0421 10:45:42.294272 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:42.294435 kubelet[2494]: E0421 10:45:42.294287 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:42.301790 containerd[1449]: time="2026-04-21T10:45:42.301713106Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 10:45:42.321292 containerd[1449]: time="2026-04-21T10:45:42.321236913Z" level=info msg="CreateContainer within sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\""
Apr 21 10:45:42.325133 containerd[1449]: time="2026-04-21T10:45:42.325072395Z" level=info msg="StartContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\""
Apr 21 10:45:42.354889 systemd[1]: Started cri-containerd-a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94.scope - libcontainer container a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94.
Apr 21 10:45:42.397921 containerd[1449]: time="2026-04-21T10:45:42.397638064Z" level=info msg="StartContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" returns successfully"
Apr 21 10:45:42.578375 kubelet[2494]: I0421 10:45:42.577920 2494 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 21 10:45:42.664380 systemd[1]: Created slice kubepods-burstable-podd49fb683_4dba_4d5f_b894_4259e92950fc.slice - libcontainer container kubepods-burstable-podd49fb683_4dba_4d5f_b894_4259e92950fc.slice.
Apr 21 10:45:42.671932 systemd[1]: Created slice kubepods-burstable-podb80edad8_9128_4e86_871b_fff032d0ce70.slice - libcontainer container kubepods-burstable-podb80edad8_9128_4e86_871b_fff032d0ce70.slice.
Apr 21 10:45:42.799786 kubelet[2494]: I0421 10:45:42.799747 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49fb683-4dba-4d5f-b894-4259e92950fc-config-volume\") pod \"coredns-7d764666f9-4gtn4\" (UID: \"d49fb683-4dba-4d5f-b894-4259e92950fc\") " pod="kube-system/coredns-7d764666f9-4gtn4"
Apr 21 10:45:42.799786 kubelet[2494]: I0421 10:45:42.799782 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80edad8-9128-4e86-871b-fff032d0ce70-config-volume\") pod \"coredns-7d764666f9-4xprl\" (UID: \"b80edad8-9128-4e86-871b-fff032d0ce70\") " pod="kube-system/coredns-7d764666f9-4xprl"
Apr 21 10:45:42.800036 kubelet[2494]: I0421 10:45:42.799828 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqwc\" (UniqueName: \"kubernetes.io/projected/b80edad8-9128-4e86-871b-fff032d0ce70-kube-api-access-rnqwc\") pod \"coredns-7d764666f9-4xprl\" (UID: \"b80edad8-9128-4e86-871b-fff032d0ce70\") " pod="kube-system/coredns-7d764666f9-4xprl"
Apr 21 10:45:42.800036 kubelet[2494]: I0421 10:45:42.799846 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdbc\" (UniqueName: \"kubernetes.io/projected/d49fb683-4dba-4d5f-b894-4259e92950fc-kube-api-access-fhdbc\") pod \"coredns-7d764666f9-4gtn4\" (UID: \"d49fb683-4dba-4d5f-b894-4259e92950fc\") " pod="kube-system/coredns-7d764666f9-4gtn4"
Apr 21 10:45:42.979545 kubelet[2494]: E0421 10:45:42.979268 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:42.983577 kubelet[2494]: E0421 10:45:42.983474 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:42.993207 containerd[1449]: time="2026-04-21T10:45:42.992659850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4xprl,Uid:b80edad8-9128-4e86-871b-fff032d0ce70,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:42.993863 containerd[1449]: time="2026-04-21T10:45:42.993243221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4gtn4,Uid:d49fb683-4dba-4d5f-b894-4259e92950fc,Namespace:kube-system,Attempt:0,}"
Apr 21 10:45:43.302895 kubelet[2494]: E0421 10:45:43.302795 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:43.338303 kubelet[2494]: I0421 10:45:43.338206 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-ksm25" podStartSLOduration=2.233764682 podStartE2EDuration="14.338181921s" podCreationTimestamp="2026-04-21 10:45:29 +0000 UTC" firstStartedPulling="2026-04-21 10:45:30.190468346 +0000 UTC m=+8.386238210" lastFinishedPulling="2026-04-21 10:45:42.294885577 +0000 UTC m=+20.490655449" observedRunningTime="2026-04-21 10:45:43.336475009 +0000 UTC m=+21.532244887" watchObservedRunningTime="2026-04-21 10:45:43.338181921 +0000 UTC m=+21.533951796"
Apr 21 10:45:44.341005 kubelet[2494]: E0421 10:45:44.340009 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:44.506586 systemd-networkd[1375]: cilium_host: Link UP
Apr 21 10:45:44.506743 systemd-networkd[1375]: cilium_net: Link UP
Apr 21 10:45:44.506745 systemd-networkd[1375]: cilium_net: Gained carrier
Apr 21 10:45:44.506841 systemd-networkd[1375]: cilium_host: Gained carrier
Apr 21 10:45:44.507120 systemd-networkd[1375]: cilium_host: Gained IPv6LL
Apr 21 10:45:44.632868 systemd-networkd[1375]: cilium_vxlan: Link UP
Apr 21 10:45:44.634254 systemd-networkd[1375]: cilium_vxlan: Gained carrier
Apr 21 10:45:44.870764 kernel: NET: Registered PF_ALG protocol family
Apr 21 10:45:44.952059 systemd-networkd[1375]: cilium_net: Gained IPv6LL
Apr 21 10:45:45.346619 kubelet[2494]: E0421 10:45:45.346516 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:45.532025 systemd-networkd[1375]: lxc_health: Link UP
Apr 21 10:45:45.540976 systemd-networkd[1375]: lxc_health: Gained carrier
Apr 21 10:45:46.108050 systemd-networkd[1375]: lxce4a9a9dcb3cb: Link UP
Apr 21 10:45:46.130547 kernel: eth0: renamed from tmp6ca05
Apr 21 10:45:46.139215 systemd-networkd[1375]: lxcfb446de98fad: Link UP
Apr 21 10:45:46.142999 systemd-networkd[1375]: lxce4a9a9dcb3cb: Gained carrier
Apr 21 10:45:46.145732 kernel: eth0: renamed from tmpe6dde
Apr 21 10:45:46.157837 systemd-networkd[1375]: lxcfb446de98fad: Gained carrier
Apr 21 10:45:46.200237 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL
Apr 21 10:45:47.226762 systemd-networkd[1375]: lxcfb446de98fad: Gained IPv6LL
Apr 21 10:45:47.462578 kubelet[2494]: E0421 10:45:47.462267 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:47.480359 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Apr 21 10:45:47.608579 systemd-networkd[1375]: lxce4a9a9dcb3cb: Gained IPv6LL
Apr 21 10:45:48.354129 kubelet[2494]: E0421 10:45:48.354009 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:49.356764 kubelet[2494]: E0421 10:45:49.356594 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:49.801205 containerd[1449]: time="2026-04-21T10:45:49.801028932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:49.801205 containerd[1449]: time="2026-04-21T10:45:49.801131205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:49.801205 containerd[1449]: time="2026-04-21T10:45:49.801143432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:49.802172 containerd[1449]: time="2026-04-21T10:45:49.801219701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:49.803390 containerd[1449]: time="2026-04-21T10:45:49.803304195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:45:49.803490 containerd[1449]: time="2026-04-21T10:45:49.803386942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:45:49.803490 containerd[1449]: time="2026-04-21T10:45:49.803405614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:49.803575 containerd[1449]: time="2026-04-21T10:45:49.803469283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:45:49.834014 systemd[1]: Started cri-containerd-6ca053059f82efa91602dbe6c153ca461152d8e53a4c50978430291ac9565c79.scope - libcontainer container 6ca053059f82efa91602dbe6c153ca461152d8e53a4c50978430291ac9565c79.
Apr 21 10:45:49.836961 systemd[1]: Started cri-containerd-e6dde1b557e7f195a0c063ad4a19bc3ca4ba28c08120b9955f0c433e45cbd973.scope - libcontainer container e6dde1b557e7f195a0c063ad4a19bc3ca4ba28c08120b9955f0c433e45cbd973.
Apr 21 10:45:49.847484 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:45:49.850739 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:45:49.873142 containerd[1449]: time="2026-04-21T10:45:49.872378340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4gtn4,Uid:d49fb683-4dba-4d5f-b894-4259e92950fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6dde1b557e7f195a0c063ad4a19bc3ca4ba28c08120b9955f0c433e45cbd973\""
Apr 21 10:45:49.873466 kubelet[2494]: E0421 10:45:49.873416 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:49.878805 containerd[1449]: time="2026-04-21T10:45:49.878772340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4xprl,Uid:b80edad8-9128-4e86-871b-fff032d0ce70,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca053059f82efa91602dbe6c153ca461152d8e53a4c50978430291ac9565c79\""
Apr 21 10:45:49.883064 containerd[1449]: time="2026-04-21T10:45:49.883042156Z" level=info msg="CreateContainer within sandbox \"e6dde1b557e7f195a0c063ad4a19bc3ca4ba28c08120b9955f0c433e45cbd973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:45:49.883771 kubelet[2494]: E0421 10:45:49.883306 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:49.887381 containerd[1449]: time="2026-04-21T10:45:49.887328295Z" level=info msg="CreateContainer within sandbox \"6ca053059f82efa91602dbe6c153ca461152d8e53a4c50978430291ac9565c79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:45:49.904072 containerd[1449]: time="2026-04-21T10:45:49.903344713Z" level=info msg="CreateContainer within sandbox \"e6dde1b557e7f195a0c063ad4a19bc3ca4ba28c08120b9955f0c433e45cbd973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a176125399fa75b1563306058c4b5438c24e19b3fdc68a4fb7f536226f839a5\""
Apr 21 10:45:49.905123 containerd[1449]: time="2026-04-21T10:45:49.904749979Z" level=info msg="StartContainer for \"5a176125399fa75b1563306058c4b5438c24e19b3fdc68a4fb7f536226f839a5\""
Apr 21 10:45:49.916960 containerd[1449]: time="2026-04-21T10:45:49.916883193Z" level=info msg="CreateContainer within sandbox \"6ca053059f82efa91602dbe6c153ca461152d8e53a4c50978430291ac9565c79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bc08e49250033f7eb87af40c5d0c8c1c5bde70e5d0d99e4222d9a42907a4e9f\""
Apr 21 10:45:49.917761 containerd[1449]: time="2026-04-21T10:45:49.917670651Z" level=info msg="StartContainer for \"9bc08e49250033f7eb87af40c5d0c8c1c5bde70e5d0d99e4222d9a42907a4e9f\""
Apr 21 10:45:49.935355 systemd[1]: Started cri-containerd-5a176125399fa75b1563306058c4b5438c24e19b3fdc68a4fb7f536226f839a5.scope - libcontainer container 5a176125399fa75b1563306058c4b5438c24e19b3fdc68a4fb7f536226f839a5.
Apr 21 10:45:49.958284 systemd[1]: Started cri-containerd-9bc08e49250033f7eb87af40c5d0c8c1c5bde70e5d0d99e4222d9a42907a4e9f.scope - libcontainer container 9bc08e49250033f7eb87af40c5d0c8c1c5bde70e5d0d99e4222d9a42907a4e9f.
Apr 21 10:45:49.979754 containerd[1449]: time="2026-04-21T10:45:49.979599952Z" level=info msg="StartContainer for \"5a176125399fa75b1563306058c4b5438c24e19b3fdc68a4fb7f536226f839a5\" returns successfully"
Apr 21 10:45:49.990337 containerd[1449]: time="2026-04-21T10:45:49.990269227Z" level=info msg="StartContainer for \"9bc08e49250033f7eb87af40c5d0c8c1c5bde70e5d0d99e4222d9a42907a4e9f\" returns successfully"
Apr 21 10:45:50.368486 kubelet[2494]: E0421 10:45:50.363938 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:50.370555 kubelet[2494]: E0421 10:45:50.370485 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:50.431399 kubelet[2494]: I0421 10:45:50.431070 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-4gtn4" podStartSLOduration=21.431022137 podStartE2EDuration="21.431022137s" podCreationTimestamp="2026-04-21 10:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:50.430753887 +0000 UTC m=+28.626523761" watchObservedRunningTime="2026-04-21 10:45:50.431022137 +0000 UTC m=+28.626792012"
Apr 21 10:45:51.375469 kubelet[2494]: E0421 10:45:51.375303 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:51.378127 kubelet[2494]: E0421 10:45:51.378088 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:52.379545 kubelet[2494]: E0421 10:45:52.379396 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:52.379545 kubelet[2494]: E0421 10:45:52.379401 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:45:57.272824 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:47752.service - OpenSSH per-connection server daemon (10.0.0.1:47752).
Apr 21 10:45:57.312918 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:45:57.314634 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:45:57.325492 systemd-logind[1436]: New session 8 of user core.
Apr 21 10:45:57.338739 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:45:57.505487 sshd[3902]: pam_unix(sshd:session): session closed for user core
Apr 21 10:45:57.509923 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:47752.service: Deactivated successfully.
Apr 21 10:45:57.511758 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:45:57.512401 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:45:57.513575 systemd-logind[1436]: Removed session 8.
Apr 21 10:46:02.537302 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028).
Apr 21 10:46:02.569345 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:02.571008 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:02.575883 systemd-logind[1436]: New session 9 of user core.
Apr 21 10:46:02.584869 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:46:02.750408 sshd[3920]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:02.753214 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:41028.service: Deactivated successfully.
Apr 21 10:46:02.754548 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:46:02.755097 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:46:02.755828 systemd-logind[1436]: Removed session 9.
Apr 21 10:46:07.762223 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:41042.service - OpenSSH per-connection server daemon (10.0.0.1:41042).
Apr 21 10:46:07.796911 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 41042 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:07.798210 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:07.804366 systemd-logind[1436]: New session 10 of user core.
Apr 21 10:46:07.814903 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:46:07.962662 sshd[3935]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:07.966261 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:41042.service: Deactivated successfully.
Apr 21 10:46:07.967792 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:46:07.968400 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:46:07.969141 systemd-logind[1436]: Removed session 10.
Apr 21 10:46:12.980897 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:60652.service - OpenSSH per-connection server daemon (10.0.0.1:60652).
Apr 21 10:46:13.019525 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 60652 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:13.021537 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:13.067815 systemd-logind[1436]: New session 11 of user core.
Apr 21 10:46:13.075505 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 10:46:13.227765 sshd[3950]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:13.235117 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:60652.service: Deactivated successfully.
Apr 21 10:46:13.238403 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 10:46:13.239048 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit.
Apr 21 10:46:13.240024 systemd-logind[1436]: Removed session 11.
Apr 21 10:46:18.239569 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666).
Apr 21 10:46:18.270931 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:18.272231 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:18.275911 systemd-logind[1436]: New session 12 of user core.
Apr 21 10:46:18.283844 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 10:46:18.402352 sshd[3966]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:18.413220 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:60666.service: Deactivated successfully.
Apr 21 10:46:18.414941 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 10:46:18.416254 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit.
Apr 21 10:46:18.424995 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:60670.service - OpenSSH per-connection server daemon (10.0.0.1:60670).
Apr 21 10:46:18.426395 systemd-logind[1436]: Removed session 12.
Apr 21 10:46:18.455668 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 60670 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:18.456846 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:18.461032 systemd-logind[1436]: New session 13 of user core.
Apr 21 10:46:18.472621 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:46:18.634618 sshd[3982]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:18.642009 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:60670.service: Deactivated successfully.
Apr 21 10:46:18.643501 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:46:18.645597 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:46:18.652678 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:60680.service - OpenSSH per-connection server daemon (10.0.0.1:60680).
Apr 21 10:46:18.657165 systemd-logind[1436]: Removed session 13.
Apr 21 10:46:18.688328 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 60680 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:18.689385 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:18.694144 systemd-logind[1436]: New session 14 of user core.
Apr 21 10:46:18.713934 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:46:18.831358 sshd[3995]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:18.834868 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:60680.service: Deactivated successfully.
Apr 21 10:46:18.836376 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:46:18.837079 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:46:18.837960 systemd-logind[1436]: Removed session 14.
Apr 21 10:46:23.843675 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642).
Apr 21 10:46:23.908288 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:23.911628 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:23.919910 systemd-logind[1436]: New session 15 of user core.
Apr 21 10:46:23.926401 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:46:24.067547 sshd[4011]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:24.070556 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:57642.service: Deactivated successfully.
Apr 21 10:46:24.072104 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:46:24.073159 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:46:24.073922 systemd-logind[1436]: Removed session 15.
Apr 21 10:46:29.081957 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:57652.service - OpenSSH per-connection server daemon (10.0.0.1:57652).
Apr 21 10:46:29.165334 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 57652 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:29.166774 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:29.170941 systemd-logind[1436]: New session 16 of user core.
Apr 21 10:46:29.180659 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:46:29.333760 sshd[4025]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:29.339185 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:57652.service: Deactivated successfully.
Apr 21 10:46:29.341046 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:46:29.341604 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:46:29.342463 systemd-logind[1436]: Removed session 16.
Apr 21 10:46:34.350238 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:33376.service - OpenSSH per-connection server daemon (10.0.0.1:33376).
Apr 21 10:46:34.423656 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 33376 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:34.425131 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:34.430895 systemd-logind[1436]: New session 17 of user core.
Apr 21 10:46:34.440008 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:46:34.579341 sshd[4041]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:34.589634 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:33376.service: Deactivated successfully.
Apr 21 10:46:34.591054 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:46:34.592172 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:46:34.598987 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:33386.service - OpenSSH per-connection server daemon (10.0.0.1:33386).
Apr 21 10:46:34.599664 systemd-logind[1436]: Removed session 17.
Apr 21 10:46:34.627652 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 33386 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:34.628918 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:34.632851 systemd-logind[1436]: New session 18 of user core.
Apr 21 10:46:34.639802 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:46:34.865584 sshd[4055]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:34.871625 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:33386.service: Deactivated successfully.
Apr 21 10:46:34.872915 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:46:34.874208 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:46:34.875250 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:33400.service - OpenSSH per-connection server daemon (10.0.0.1:33400).
Apr 21 10:46:34.875905 systemd-logind[1436]: Removed session 18.
Apr 21 10:46:34.918081 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 33400 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:34.919569 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:34.923728 systemd-logind[1436]: New session 19 of user core.
Apr 21 10:46:34.928903 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:46:35.423484 sshd[4068]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:35.434680 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:33400.service: Deactivated successfully.
Apr 21 10:46:35.437443 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:46:35.438740 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:46:35.450273 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:33416.service - OpenSSH per-connection server daemon (10.0.0.1:33416).
Apr 21 10:46:35.456142 systemd-logind[1436]: Removed session 19.
Apr 21 10:46:35.486890 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 33416 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:35.488115 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:35.494189 systemd-logind[1436]: New session 20 of user core.
Apr 21 10:46:35.507885 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:46:35.726335 sshd[4090]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:35.732900 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:33416.service: Deactivated successfully.
Apr 21 10:46:35.734301 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:46:35.735805 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:46:35.742921 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:33428.service - OpenSSH per-connection server daemon (10.0.0.1:33428).
Apr 21 10:46:35.744532 systemd-logind[1436]: Removed session 20.
Apr 21 10:46:35.773867 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 33428 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:35.775168 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:35.779468 systemd-logind[1436]: New session 21 of user core.
Apr 21 10:46:35.786837 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:46:35.878824 sshd[4104]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:35.880993 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:33428.service: Deactivated successfully.
Apr 21 10:46:35.884862 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:46:35.886319 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:46:35.887126 systemd-logind[1436]: Removed session 21.
Apr 21 10:46:40.916347 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:54220.service - OpenSSH per-connection server daemon (10.0.0.1:54220).
Apr 21 10:46:40.952044 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 54220 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:40.953409 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:40.958108 systemd-logind[1436]: New session 22 of user core.
Apr 21 10:46:40.973925 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:46:41.105734 sshd[4120]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:41.108770 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:54220.service: Deactivated successfully.
Apr 21 10:46:41.110085 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:46:41.110602 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:46:41.111436 systemd-logind[1436]: Removed session 22.
Apr 21 10:46:46.133540 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224).
Apr 21 10:46:46.171286 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:46.172888 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:46.177305 systemd-logind[1436]: New session 23 of user core.
Apr 21 10:46:46.184353 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:46:46.294380 sshd[4136]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:46.298360 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:54224.service: Deactivated successfully.
Apr 21 10:46:46.300241 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:46:46.300880 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:46:46.301661 systemd-logind[1436]: Removed session 23.
Apr 21 10:46:51.303976 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:49506.service - OpenSSH per-connection server daemon (10.0.0.1:49506).
Apr 21 10:46:51.334422 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 49506 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:51.335679 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:51.339298 systemd-logind[1436]: New session 24 of user core.
Apr 21 10:46:51.349903 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:46:51.452899 sshd[4151]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:51.461996 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:49506.service: Deactivated successfully.
Apr 21 10:46:51.463392 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 10:46:51.464653 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit.
Apr 21 10:46:51.465836 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:49510.service - OpenSSH per-connection server daemon (10.0.0.1:49510).
Apr 21 10:46:51.466393 systemd-logind[1436]: Removed session 24.
Apr 21 10:46:51.496222 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:51.497427 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:51.501075 systemd-logind[1436]: New session 25 of user core.
Apr 21 10:46:51.516888 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 21 10:46:51.918523 kubelet[2494]: E0421 10:46:51.918449 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:51.919193 kubelet[2494]: E0421 10:46:51.918537 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:52.880552 kubelet[2494]: I0421 10:46:52.880438 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-4xprl" podStartSLOduration=83.880427764 podStartE2EDuration="1m23.880427764s" podCreationTimestamp="2026-04-21 10:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:50.472782857 +0000 UTC m=+28.668552729" watchObservedRunningTime="2026-04-21 10:46:52.880427764 +0000 UTC m=+91.076197639"
Apr 21 10:46:52.962306 systemd[1]: run-containerd-runc-k8s.io-a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94-runc.Ha0eT9.mount: Deactivated successfully.
Apr 21 10:46:52.970064 containerd[1449]: time="2026-04-21T10:46:52.969920667Z" level=info msg="StopContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" with timeout 30 (s)"
Apr 21 10:46:52.974068 containerd[1449]: time="2026-04-21T10:46:52.971997160Z" level=info msg="Stop container \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" with signal terminated"
Apr 21 10:46:52.975551 containerd[1449]: time="2026-04-21T10:46:52.975528001Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:46:52.983215 containerd[1449]: time="2026-04-21T10:46:52.983182704Z" level=info msg="StopContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" with timeout 2 (s)"
Apr 21 10:46:52.983845 containerd[1449]: time="2026-04-21T10:46:52.983794034Z" level=info msg="Stop container \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" with signal terminated"
Apr 21 10:46:52.986342 systemd[1]: cri-containerd-2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df.scope: Deactivated successfully.
Apr 21 10:46:52.994882 systemd-networkd[1375]: lxc_health: Link DOWN
Apr 21 10:46:52.994888 systemd-networkd[1375]: lxc_health: Lost carrier
Apr 21 10:46:53.011086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df-rootfs.mount: Deactivated successfully.
Apr 21 10:46:53.019012 systemd[1]: cri-containerd-a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94.scope: Deactivated successfully.
Apr 21 10:46:53.019609 containerd[1449]: time="2026-04-21T10:46:53.019422346Z" level=info msg="shim disconnected" id=2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df namespace=k8s.io
Apr 21 10:46:53.019609 containerd[1449]: time="2026-04-21T10:46:53.019470031Z" level=warning msg="cleaning up after shim disconnected" id=2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df namespace=k8s.io
Apr 21 10:46:53.019609 containerd[1449]: time="2026-04-21T10:46:53.019481697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:53.019975 systemd[1]: cri-containerd-a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94.scope: Consumed 7.013s CPU time.
Apr 21 10:46:53.046386 containerd[1449]: time="2026-04-21T10:46:53.046286212Z" level=info msg="StopContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" returns successfully"
Apr 21 10:46:53.047392 containerd[1449]: time="2026-04-21T10:46:53.047371417Z" level=info msg="StopPodSandbox for \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\""
Apr 21 10:46:53.047478 containerd[1449]: time="2026-04-21T10:46:53.047467926Z" level=info msg="Container to stop \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.049550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71-shm.mount: Deactivated successfully.
Apr 21 10:46:53.059992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94-rootfs.mount: Deactivated successfully.
Apr 21 10:46:53.068118 containerd[1449]: time="2026-04-21T10:46:53.067944062Z" level=info msg="shim disconnected" id=a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94 namespace=k8s.io
Apr 21 10:46:53.068118 containerd[1449]: time="2026-04-21T10:46:53.068075940Z" level=warning msg="cleaning up after shim disconnected" id=a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94 namespace=k8s.io
Apr 21 10:46:53.068118 containerd[1449]: time="2026-04-21T10:46:53.068086281Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:53.068848 systemd[1]: cri-containerd-1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71.scope: Deactivated successfully.
Apr 21 10:46:53.089091 containerd[1449]: time="2026-04-21T10:46:53.088813590Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:46:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:46:53.094481 containerd[1449]: time="2026-04-21T10:46:53.094311012Z" level=info msg="StopContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" returns successfully"
Apr 21 10:46:53.095811 containerd[1449]: time="2026-04-21T10:46:53.095723563Z" level=info msg="StopPodSandbox for \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\""
Apr 21 10:46:53.095884 containerd[1449]: time="2026-04-21T10:46:53.095812273Z" level=info msg="Container to stop \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.095884 containerd[1449]: time="2026-04-21T10:46:53.095823278Z" level=info msg="Container to stop \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.095884 containerd[1449]: time="2026-04-21T10:46:53.095830175Z" level=info msg="Container to stop \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.095884 containerd[1449]: time="2026-04-21T10:46:53.095837103Z" level=info msg="Container to stop \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.095884 containerd[1449]: time="2026-04-21T10:46:53.095843958Z" level=info msg="Container to stop \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:46:53.107294 systemd[1]: cri-containerd-f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24.scope: Deactivated successfully.
Apr 21 10:46:53.108841 containerd[1449]: time="2026-04-21T10:46:53.108248529Z" level=info msg="shim disconnected" id=1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71 namespace=k8s.io
Apr 21 10:46:53.108841 containerd[1449]: time="2026-04-21T10:46:53.108282982Z" level=warning msg="cleaning up after shim disconnected" id=1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71 namespace=k8s.io
Apr 21 10:46:53.108841 containerd[1449]: time="2026-04-21T10:46:53.108288847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:53.138454 containerd[1449]: time="2026-04-21T10:46:53.135666202Z" level=info msg="shim disconnected" id=f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24 namespace=k8s.io
Apr 21 10:46:53.138454 containerd[1449]: time="2026-04-21T10:46:53.135779643Z" level=warning msg="cleaning up after shim disconnected" id=f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24 namespace=k8s.io
Apr 21 10:46:53.138454 containerd[1449]: time="2026-04-21T10:46:53.135790221Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:53.138454 containerd[1449]: time="2026-04-21T10:46:53.136446980Z" level=info msg="TearDown network for sandbox \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\" successfully"
Apr 21 10:46:53.138454 containerd[1449]: time="2026-04-21T10:46:53.136463312Z" level=info msg="StopPodSandbox for \"1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71\" returns successfully"
Apr 21 10:46:53.155873 containerd[1449]: time="2026-04-21T10:46:53.155832031Z" level=info msg="TearDown network for sandbox \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" successfully"
Apr 21 10:46:53.155873 containerd[1449]: time="2026-04-21T10:46:53.155867283Z" level=info msg="StopPodSandbox for \"f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24\" returns successfully"
Apr 21 10:46:53.246098 kubelet[2494]: I0421 10:46:53.245930 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/840da7cd-dd7a-4436-80f6-84d12ea3d902-kube-api-access-cwvpz\" (UniqueName: \"kubernetes.io/projected/840da7cd-dd7a-4436-80f6-84d12ea3d902-kube-api-access-cwvpz\") pod \"840da7cd-dd7a-4436-80f6-84d12ea3d902\" (UID: \"840da7cd-dd7a-4436-80f6-84d12ea3d902\") "
Apr 21 10:46:53.246098 kubelet[2494]: I0421 10:46:53.246056 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-cgroup\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.246098 kubelet[2494]: I0421 10:46:53.246101 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-config-path\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.246098 kubelet[2494]: I0421 10:46:53.246117 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-hubble-tls\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.246098 kubelet[2494]: I0421 10:46:53.246133 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/f3a7b6e4-8033-458b-885c-e336932661cb-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a7b6e4-8033-458b-885c-e336932661cb-clustermesh-secrets\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247380 kubelet[2494]: I0421 10:46:53.247318 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-hostproc\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247469 kubelet[2494]: I0421 10:46:53.247390 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-lib-modules\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247469 kubelet[2494]: I0421 10:46:53.247409 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-kube-api-access-5c57p\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-kube-api-access-5c57p\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247469 kubelet[2494]: I0421 10:46:53.247426 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-run\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247469 kubelet[2494]: I0421 10:46:53.247438 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-bpf-maps\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247469 kubelet[2494]: I0421 10:46:53.247451 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-kernel\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247609 kubelet[2494]: I0421 10:46:53.247466 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/840da7cd-dd7a-4436-80f6-84d12ea3d902-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840da7cd-dd7a-4436-80f6-84d12ea3d902-cilium-config-path\") pod \"840da7cd-dd7a-4436-80f6-84d12ea3d902\" (UID: \"840da7cd-dd7a-4436-80f6-84d12ea3d902\") "
Apr 21 10:46:53.247609 kubelet[2494]: I0421 10:46:53.247506 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-net\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247609 kubelet[2494]: I0421 10:46:53.247519 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-xtables-lock\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247609 kubelet[2494]: I0421 10:46:53.247548 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cni-path\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247609 kubelet[2494]: I0421 10:46:53.247559 2494 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-etc-cni-netd\") pod \"f3a7b6e4-8033-458b-885c-e336932661cb\" (UID: \"f3a7b6e4-8033-458b-885c-e336932661cb\") "
Apr 21 10:46:53.247817 kubelet[2494]: I0421 10:46:53.247604 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-etc-cni-netd" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.247817 kubelet[2494]: I0421 10:46:53.247632 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-hostproc" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.247817 kubelet[2494]: I0421 10:46:53.247642 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-lib-modules" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.248837 kubelet[2494]: I0421 10:46:53.248207 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-run" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.248837 kubelet[2494]: I0421 10:46:53.248213 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-config-path" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:46:53.248837 kubelet[2494]: I0421 10:46:53.248232 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-cgroup" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.248837 kubelet[2494]: I0421 10:46:53.248298 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-bpf-maps" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.248837 kubelet[2494]: I0421 10:46:53.248316 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-kernel" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.249129 kubelet[2494]: I0421 10:46:53.248406 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-xtables-lock" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.249129 kubelet[2494]: I0421 10:46:53.248427 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-net" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.249129 kubelet[2494]: I0421 10:46:53.248437 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cni-path" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:46:53.250240 kubelet[2494]: I0421 10:46:53.250207 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-kube-api-access-5c57p" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "kube-api-access-5c57p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:46:53.250713 kubelet[2494]: I0421 10:46:53.250651 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a7b6e4-8033-458b-885c-e336932661cb-clustermesh-secrets" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 10:46:53.251174 kubelet[2494]: I0421 10:46:53.251150 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840da7cd-dd7a-4436-80f6-84d12ea3d902-cilium-config-path" pod "840da7cd-dd7a-4436-80f6-84d12ea3d902" (UID: "840da7cd-dd7a-4436-80f6-84d12ea3d902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:46:53.251256 kubelet[2494]: I0421 10:46:53.251238 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840da7cd-dd7a-4436-80f6-84d12ea3d902-kube-api-access-cwvpz" pod "840da7cd-dd7a-4436-80f6-84d12ea3d902" (UID: "840da7cd-dd7a-4436-80f6-84d12ea3d902"). InnerVolumeSpecName "kube-api-access-cwvpz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:46:53.251970 kubelet[2494]: I0421 10:46:53.251916 2494 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-hubble-tls" pod "f3a7b6e4-8033-458b-885c-e336932661cb" (UID: "f3a7b6e4-8033-458b-885c-e336932661cb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:46:53.349438 kubelet[2494]: I0421 10:46:53.349093 2494 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840da7cd-dd7a-4436-80f6-84d12ea3d902-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.349438 kubelet[2494]: I0421 10:46:53.349328 2494 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.349438 kubelet[2494]: I0421 10:46:53.349390 2494 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.349438 kubelet[2494]: I0421 10:46:53.349463 2494 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349509 2494 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349517 2494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cwvpz\" (UniqueName: \"kubernetes.io/projected/840da7cd-dd7a-4436-80f6-84d12ea3d902-kube-api-access-cwvpz\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349524 2494 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349557 2494 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349563 2494 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349570 2494 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a7b6e4-8033-458b-885c-e336932661cb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349576 2494 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350060 kubelet[2494]: I0421 10:46:53.349582 2494 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350250 kubelet[2494]: I0421 10:46:53.349588 2494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5c57p\" (UniqueName: \"kubernetes.io/projected/f3a7b6e4-8033-458b-885c-e336932661cb-kube-api-access-5c57p\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350250 kubelet[2494]: I0421 10:46:53.349594 2494 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350250 kubelet[2494]: I0421 10:46:53.349599 2494 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.350250 kubelet[2494]: I0421 10:46:53.349651 2494 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a7b6e4-8033-458b-885c-e336932661cb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 21 10:46:53.699087 kubelet[2494]: I0421 10:46:53.699028 2494 scope.go:122] "RemoveContainer" containerID="a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94"
Apr 21 10:46:53.701579 containerd[1449]: time="2026-04-21T10:46:53.701100348Z" level=info msg="RemoveContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\""
Apr 21 10:46:53.703217 systemd[1]: Removed slice kubepods-burstable-podf3a7b6e4_8033_458b_885c_e336932661cb.slice - libcontainer container kubepods-burstable-podf3a7b6e4_8033_458b_885c_e336932661cb.slice.
Apr 21 10:46:53.703341 systemd[1]: kubepods-burstable-podf3a7b6e4_8033_458b_885c_e336932661cb.slice: Consumed 7.123s CPU time.
Apr 21 10:46:53.704562 systemd[1]: Removed slice kubepods-besteffort-pod840da7cd_dd7a_4436_80f6_84d12ea3d902.slice - libcontainer container kubepods-besteffort-pod840da7cd_dd7a_4436_80f6_84d12ea3d902.slice.
Apr 21 10:46:53.708964 containerd[1449]: time="2026-04-21T10:46:53.708927357Z" level=info msg="RemoveContainer for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" returns successfully"
Apr 21 10:46:53.709365 kubelet[2494]: I0421 10:46:53.709175 2494 scope.go:122] "RemoveContainer" containerID="65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e"
Apr 21 10:46:53.710115 containerd[1449]: time="2026-04-21T10:46:53.710090469Z" level=info msg="RemoveContainer for \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\""
Apr 21 10:46:53.713906 containerd[1449]: time="2026-04-21T10:46:53.713831278Z" level=info msg="RemoveContainer for \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\" returns successfully"
Apr 21 10:46:53.714366 kubelet[2494]: I0421 10:46:53.714337 2494 scope.go:122] "RemoveContainer" containerID="24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4"
Apr 21 10:46:53.718516 containerd[1449]: time="2026-04-21T10:46:53.718439883Z" level=info msg="RemoveContainer for \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\""
Apr 21 10:46:53.737624 containerd[1449]: time="2026-04-21T10:46:53.737555991Z" level=info msg="RemoveContainer for \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\" returns successfully"
Apr 21 10:46:53.738054 kubelet[2494]: I0421 10:46:53.738016 2494 scope.go:122] "RemoveContainer" containerID="1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670"
Apr 21 10:46:53.739252 containerd[1449]: time="2026-04-21T10:46:53.739231890Z" level=info msg="RemoveContainer for \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\""
Apr 21 10:46:53.742474 containerd[1449]: time="2026-04-21T10:46:53.742431742Z" level=info msg="RemoveContainer for \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\" returns successfully"
Apr 21 10:46:53.742647 kubelet[2494]: I0421 10:46:53.742614 2494 scope.go:122] "RemoveContainer" containerID="fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727"
Apr 21 10:46:53.743658 containerd[1449]: time="2026-04-21T10:46:53.743600137Z" level=info msg="RemoveContainer for \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\""
Apr 21 10:46:53.746590 containerd[1449]: time="2026-04-21T10:46:53.746549544Z" level=info msg="RemoveContainer for \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\" returns successfully"
Apr 21 10:46:53.746775 kubelet[2494]: I0421 10:46:53.746763 2494 scope.go:122] "RemoveContainer" containerID="a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94"
Apr 21 10:46:53.750911 containerd[1449]: time="2026-04-21T10:46:53.750857072Z" level=error msg="ContainerStatus for \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\": not found"
Apr 21 10:46:53.759418 kubelet[2494]: E0421 10:46:53.759268 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\": not found" containerID="a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94"
Apr 21 10:46:53.759815 kubelet[2494]: I0421 10:46:53.759372 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94"} err="failed to get container status \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\": rpc error: code = NotFound desc = an error occurred when try to find container \"a06b500a5b8017bacb4f8ed48da0fb525dc208c6a344d8688aae3f3954841e94\": not found"
Apr 21 10:46:53.759815 kubelet[2494]: I0421 10:46:53.759486 2494 scope.go:122] "RemoveContainer" containerID="65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e"
Apr 21 10:46:53.759993 containerd[1449]: time="2026-04-21T10:46:53.759897625Z" level=error msg="ContainerStatus for \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\": not found"
Apr 21 10:46:53.760174 kubelet[2494]: E0421 10:46:53.760123 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\": not found" containerID="65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e"
Apr 21 10:46:53.760203 kubelet[2494]: I0421 10:46:53.760160 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e"} err="failed to get container status \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"65ba7322e345fd086d2ba4fa92f4cdca978a4f510ccaa86aa22b76eee8125f3e\": not found"
Apr 21 10:46:53.760203 kubelet[2494]: I0421 10:46:53.760193 2494 scope.go:122] "RemoveContainer" containerID="24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4"
Apr 21 10:46:53.760515 containerd[1449]: time="2026-04-21T10:46:53.760486368Z" level=error msg="ContainerStatus for \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\": not found"
Apr 21 10:46:53.760788 kubelet[2494]: E0421 10:46:53.760740 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\": not found" containerID="24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4"
Apr 21 10:46:53.760788 kubelet[2494]: I0421 10:46:53.760757 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4"} err="failed to get container status \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\": rpc error: code = NotFound desc = an error occurred when try to find container \"24c28fabaae2827fe79dfa236bf5fff8e7bd86fcc3815c06c19620ef0a851ee4\": not found"
Apr 21 10:46:53.760788 kubelet[2494]: I0421 10:46:53.760766 2494 scope.go:122] "RemoveContainer" containerID="1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670"
Apr 21 10:46:53.761123 containerd[1449]: time="2026-04-21T10:46:53.760960371Z" level=error msg="ContainerStatus for \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\": not found"
Apr 21 10:46:53.761175 kubelet[2494]: E0421 10:46:53.761040 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\": not found" containerID="1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670"
Apr 21 10:46:53.761175 kubelet[2494]: I0421 10:46:53.761061 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670"} err="failed to get container status \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\": rpc error: code = NotFound desc = an error occurred when try to find container \"1656aa295342fec8950ea56b11a6223ed482c6fd46a8c055b65c8c36257fc670\": not found"
Apr 21 10:46:53.761175 kubelet[2494]: I0421 10:46:53.761075 2494 scope.go:122] "RemoveContainer" containerID="fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727"
Apr 21 10:46:53.761456 containerd[1449]: time="2026-04-21T10:46:53.761388126Z" level=error msg="ContainerStatus for \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\": not found"
Apr 21 10:46:53.761615 kubelet[2494]: E0421 10:46:53.761536 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\": not found" containerID="fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727"
Apr 21 10:46:53.761615 kubelet[2494]: I0421 10:46:53.761554 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727"} err="failed to get container status \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\": rpc error: code = NotFound desc = an error occurred when try to find container \"fee5dda19aca8d296896038b0402ec44355b9dbcf16e4009de0576f6d7cd3727\": not found"
Apr 21 10:46:53.761615 kubelet[2494]: I0421 10:46:53.761565 2494 scope.go:122] "RemoveContainer" containerID="2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df"
Apr 21 10:46:53.762920 containerd[1449]: time="2026-04-21T10:46:53.762890563Z" level=info msg="RemoveContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\""
Apr 21 10:46:53.766257 containerd[1449]: time="2026-04-21T10:46:53.766229895Z" level=info msg="RemoveContainer for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" returns successfully"
Apr 21 10:46:53.766425 kubelet[2494]: I0421 10:46:53.766375 2494 scope.go:122] "RemoveContainer" containerID="2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df"
Apr 21 10:46:53.766627 containerd[1449]: time="2026-04-21T10:46:53.766559220Z" level=error msg="ContainerStatus for \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\": not found"
Apr 21 10:46:53.766722 kubelet[2494]: E0421 10:46:53.766668 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\": not found" containerID="2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df"
Apr 21 10:46:53.766766 kubelet[2494]: I0421 10:46:53.766741 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df"} err="failed to get container status \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c920aa8be13b5e2cb39912df4e200982064b25eb4a514a474566e788c9609df\": not found"
Apr 21 10:46:53.918384 kubelet[2494]: I0421 10:46:53.918290 2494 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="840da7cd-dd7a-4436-80f6-84d12ea3d902" path="/var/lib/kubelet/pods/840da7cd-dd7a-4436-80f6-84d12ea3d902/volumes"
Apr 21 10:46:53.918800 kubelet[2494]: I0421 10:46:53.918767 2494 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f3a7b6e4-8033-458b-885c-e336932661cb" path="/var/lib/kubelet/pods/f3a7b6e4-8033-458b-885c-e336932661cb/volumes"
Apr 21
10:46:53.957046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1247d4ff2cb9ebf827d55c742763d96e849bf628e6f467fd791f501e6752fb71-rootfs.mount: Deactivated successfully. Apr 21 10:46:53.957140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24-rootfs.mount: Deactivated successfully. Apr 21 10:46:53.957189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6d023a30a70165e93fda9b9a48c95e624c9ff2b9cdeb0f70d94c073f2ba5c24-shm.mount: Deactivated successfully. Apr 21 10:46:53.957234 systemd[1]: var-lib-kubelet-pods-840da7cd\x2ddd7a\x2d4436\x2d80f6\x2d84d12ea3d902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcwvpz.mount: Deactivated successfully. Apr 21 10:46:53.957275 systemd[1]: var-lib-kubelet-pods-f3a7b6e4\x2d8033\x2d458b\x2d885c\x2de336932661cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5c57p.mount: Deactivated successfully. Apr 21 10:46:53.957323 systemd[1]: var-lib-kubelet-pods-f3a7b6e4\x2d8033\x2d458b\x2d885c\x2de336932661cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 10:46:53.957363 systemd[1]: var-lib-kubelet-pods-f3a7b6e4\x2d8033\x2d458b\x2d885c\x2de336932661cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 10:46:54.821834 sshd[4165]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:54.831875 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:49510.service: Deactivated successfully. Apr 21 10:46:54.833088 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 10:46:54.834254 systemd-logind[1436]: Session 25 logged out. Waiting for processes to exit. Apr 21 10:46:54.841965 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526). Apr 21 10:46:54.842636 systemd-logind[1436]: Removed session 25. 
Apr 21 10:46:54.871635 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:54.873091 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:54.878954 systemd-logind[1436]: New session 26 of user core.
Apr 21 10:46:54.885892 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 21 10:46:54.916838 kubelet[2494]: E0421 10:46:54.916661 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:55.635136 sshd[4326]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:55.646189 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:49526.service: Deactivated successfully.
Apr 21 10:46:55.647602 systemd[1]: session-26.scope: Deactivated successfully.
Apr 21 10:46:55.650803 systemd-logind[1436]: Session 26 logged out. Waiting for processes to exit.
Apr 21 10:46:55.659367 systemd[1]: Started sshd@26-10.0.0.151:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528).
Apr 21 10:46:55.666262 systemd-logind[1436]: Removed session 26.
Apr 21 10:46:55.671886 kubelet[2494]: I0421 10:46:55.668670 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-host-proc-sys-net\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.671886 kubelet[2494]: I0421 10:46:55.668766 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkhj\" (UniqueName: \"kubernetes.io/projected/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-kube-api-access-6tkhj\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.671886 kubelet[2494]: I0421 10:46:55.668836 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-clustermesh-secrets\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.671886 kubelet[2494]: I0421 10:46:55.668859 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-cilium-config-path\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.671886 kubelet[2494]: I0421 10:46:55.668878 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-etc-cni-netd\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668889 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-lib-modules\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668901 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-host-proc-sys-kernel\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668917 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-cilium-run\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668930 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-cilium-cgroup\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668944 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-bpf-maps\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672198 kubelet[2494]: I0421 10:46:55.668955 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-hostproc\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672316 kubelet[2494]: I0421 10:46:55.668966 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-cilium-ipsec-secrets\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672316 kubelet[2494]: I0421 10:46:55.668976 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-hubble-tls\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672316 kubelet[2494]: I0421 10:46:55.668997 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-cni-path\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.672316 kubelet[2494]: I0421 10:46:55.669007 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5c53238-03ed-4fcc-aab1-3ec6d2f422bf-xtables-lock\") pod \"cilium-8khbj\" (UID: \"b5c53238-03ed-4fcc-aab1-3ec6d2f422bf\") " pod="kube-system/cilium-8khbj"
Apr 21 10:46:55.688682 systemd[1]: Created slice kubepods-burstable-podb5c53238_03ed_4fcc_aab1_3ec6d2f422bf.slice - libcontainer container kubepods-burstable-podb5c53238_03ed_4fcc_aab1_3ec6d2f422bf.slice.
Apr 21 10:46:55.706404 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:55.705150 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:55.717525 systemd-logind[1436]: New session 27 of user core.
Apr 21 10:46:55.725619 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 21 10:46:55.791949 sshd[4339]: pam_unix(sshd:session): session closed for user core
Apr 21 10:46:55.812162 systemd[1]: sshd@26-10.0.0.151:22-10.0.0.1:49528.service: Deactivated successfully.
Apr 21 10:46:55.813935 systemd[1]: session-27.scope: Deactivated successfully.
Apr 21 10:46:55.814606 systemd-logind[1436]: Session 27 logged out. Waiting for processes to exit.
Apr 21 10:46:55.816529 systemd[1]: Started sshd@27-10.0.0.151:22-10.0.0.1:49532.service - OpenSSH per-connection server daemon (10.0.0.1:49532).
Apr 21 10:46:55.817576 systemd-logind[1436]: Removed session 27.
Apr 21 10:46:55.849621 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:46:55.851293 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:46:55.855134 systemd-logind[1436]: New session 28 of user core.
Apr 21 10:46:55.870896 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 21 10:46:55.920756 kubelet[2494]: E0421 10:46:55.920604 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:56.006627 kubelet[2494]: E0421 10:46:56.001675 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:56.007074 containerd[1449]: time="2026-04-21T10:46:56.005124284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8khbj,Uid:b5c53238-03ed-4fcc-aab1-3ec6d2f422bf,Namespace:kube-system,Attempt:0,}"
Apr 21 10:46:56.034346 containerd[1449]: time="2026-04-21T10:46:56.033443243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:46:56.034346 containerd[1449]: time="2026-04-21T10:46:56.034316466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:46:56.034472 containerd[1449]: time="2026-04-21T10:46:56.034328589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:46:56.034472 containerd[1449]: time="2026-04-21T10:46:56.034391418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:46:56.053111 systemd[1]: Started cri-containerd-14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8.scope - libcontainer container 14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8.
Apr 21 10:46:56.095736 containerd[1449]: time="2026-04-21T10:46:56.095594328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8khbj,Uid:b5c53238-03ed-4fcc-aab1-3ec6d2f422bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\""
Apr 21 10:46:56.097418 kubelet[2494]: E0421 10:46:56.097332 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:56.106070 containerd[1449]: time="2026-04-21T10:46:56.105917346Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 10:46:56.121985 containerd[1449]: time="2026-04-21T10:46:56.121942252Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b\""
Apr 21 10:46:56.122396 containerd[1449]: time="2026-04-21T10:46:56.122366401Z" level=info msg="StartContainer for \"f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b\""
Apr 21 10:46:56.160107 systemd[1]: Started cri-containerd-f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b.scope - libcontainer container f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b.
Apr 21 10:46:56.182099 containerd[1449]: time="2026-04-21T10:46:56.181922699Z" level=info msg="StartContainer for \"f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b\" returns successfully"
Apr 21 10:46:56.193539 systemd[1]: cri-containerd-f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b.scope: Deactivated successfully.
Apr 21 10:46:56.230120 containerd[1449]: time="2026-04-21T10:46:56.229951123Z" level=info msg="shim disconnected" id=f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b namespace=k8s.io
Apr 21 10:46:56.230120 containerd[1449]: time="2026-04-21T10:46:56.230045273Z" level=warning msg="cleaning up after shim disconnected" id=f16b0108e44913c5523f51b551e2f9067254967d7dd4a7021f1c60686f051d0b namespace=k8s.io
Apr 21 10:46:56.230120 containerd[1449]: time="2026-04-21T10:46:56.230052349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:56.247670 containerd[1449]: time="2026-04-21T10:46:56.247521649Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:46:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:46:56.724393 kubelet[2494]: E0421 10:46:56.724330 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:56.737593 containerd[1449]: time="2026-04-21T10:46:56.737437975Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 10:46:56.758736 containerd[1449]: time="2026-04-21T10:46:56.757625361Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62\""
Apr 21 10:46:56.758736 containerd[1449]: time="2026-04-21T10:46:56.758323581Z" level=info msg="StartContainer for \"8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62\""
Apr 21 10:46:56.789461 systemd[1]: Started cri-containerd-8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62.scope - libcontainer container 8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62.
Apr 21 10:46:56.815037 containerd[1449]: time="2026-04-21T10:46:56.814990853Z" level=info msg="StartContainer for \"8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62\" returns successfully"
Apr 21 10:46:56.821970 systemd[1]: cri-containerd-8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62.scope: Deactivated successfully.
Apr 21 10:46:56.846667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62-rootfs.mount: Deactivated successfully.
Apr 21 10:46:56.862750 containerd[1449]: time="2026-04-21T10:46:56.862626140Z" level=info msg="shim disconnected" id=8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62 namespace=k8s.io
Apr 21 10:46:56.862750 containerd[1449]: time="2026-04-21T10:46:56.862750330Z" level=warning msg="cleaning up after shim disconnected" id=8c82d79f09ea4a9058677db34bd339ea48ad4da46a20f996dd0b064eb7f2db62 namespace=k8s.io
Apr 21 10:46:56.862893 containerd[1449]: time="2026-04-21T10:46:56.862760684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:56.917155 kubelet[2494]: E0421 10:46:56.917036 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:56.991863 kubelet[2494]: E0421 10:46:56.991581 2494 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 10:46:57.734408 kubelet[2494]: E0421 10:46:57.734326 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:57.744669 containerd[1449]: time="2026-04-21T10:46:57.744186526Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 10:46:57.766235 containerd[1449]: time="2026-04-21T10:46:57.766188956Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db\""
Apr 21 10:46:57.767240 containerd[1449]: time="2026-04-21T10:46:57.766883382Z" level=info msg="StartContainer for \"8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db\""
Apr 21 10:46:57.798060 systemd[1]: Started cri-containerd-8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db.scope - libcontainer container 8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db.
Apr 21 10:46:57.841371 containerd[1449]: time="2026-04-21T10:46:57.841180241Z" level=info msg="StartContainer for \"8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db\" returns successfully"
Apr 21 10:46:57.849066 systemd[1]: cri-containerd-8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db.scope: Deactivated successfully.
Apr 21 10:46:57.873553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db-rootfs.mount: Deactivated successfully.
Apr 21 10:46:57.879528 containerd[1449]: time="2026-04-21T10:46:57.879375732Z" level=info msg="shim disconnected" id=8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db namespace=k8s.io
Apr 21 10:46:57.879528 containerd[1449]: time="2026-04-21T10:46:57.879490734Z" level=warning msg="cleaning up after shim disconnected" id=8faa283c8050eb9ca3a38fae3c271cac7bc8577a55c6883a01d5c933ec27e1db namespace=k8s.io
Apr 21 10:46:57.879528 containerd[1449]: time="2026-04-21T10:46:57.879498444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:58.739936 kubelet[2494]: E0421 10:46:58.739663 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:58.746306 containerd[1449]: time="2026-04-21T10:46:58.746234876Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 10:46:58.761629 containerd[1449]: time="2026-04-21T10:46:58.761546573Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809\""
Apr 21 10:46:58.762778 containerd[1449]: time="2026-04-21T10:46:58.762747170Z" level=info msg="StartContainer for \"f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809\""
Apr 21 10:46:58.791886 systemd[1]: Started cri-containerd-f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809.scope - libcontainer container f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809.
Apr 21 10:46:58.817622 systemd[1]: cri-containerd-f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809.scope: Deactivated successfully.
Apr 21 10:46:58.824012 containerd[1449]: time="2026-04-21T10:46:58.823954653Z" level=info msg="StartContainer for \"f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809\" returns successfully"
Apr 21 10:46:58.842340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809-rootfs.mount: Deactivated successfully.
Apr 21 10:46:58.847907 containerd[1449]: time="2026-04-21T10:46:58.847853297Z" level=info msg="shim disconnected" id=f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809 namespace=k8s.io
Apr 21 10:46:58.847907 containerd[1449]: time="2026-04-21T10:46:58.847904300Z" level=warning msg="cleaning up after shim disconnected" id=f99341edeb0c88c8c649c9627f5ee941412972762d0ce65be7413963938b5809 namespace=k8s.io
Apr 21 10:46:58.848041 containerd[1449]: time="2026-04-21T10:46:58.847912491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:46:59.745422 kubelet[2494]: E0421 10:46:59.745367 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:46:59.754157 containerd[1449]: time="2026-04-21T10:46:59.754004268Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 10:46:59.777655 containerd[1449]: time="2026-04-21T10:46:59.777593588Z" level=info msg="CreateContainer within sandbox \"14146be3c7f0a542548e60d9c85d5b37eee47bf8956df0be1930608201434ef8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4\""
Apr 21 10:46:59.778756 containerd[1449]: time="2026-04-21T10:46:59.778194357Z" level=info msg="StartContainer for \"6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4\""
Apr 21 10:46:59.809051 systemd[1]: Started cri-containerd-6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4.scope - libcontainer container 6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4.
Apr 21 10:46:59.840358 containerd[1449]: time="2026-04-21T10:46:59.840145385Z" level=info msg="StartContainer for \"6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4\" returns successfully"
Apr 21 10:47:00.084767 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 21 10:47:00.752456 kubelet[2494]: E0421 10:47:00.752405 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:47:00.767325 kubelet[2494]: I0421 10:47:00.767255 2494 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-8khbj" podStartSLOduration=5.767244632 podStartE2EDuration="5.767244632s" podCreationTimestamp="2026-04-21 10:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:47:00.767179861 +0000 UTC m=+98.962949736" watchObservedRunningTime="2026-04-21 10:47:00.767244632 +0000 UTC m=+98.963014497"
Apr 21 10:47:01.998616 kubelet[2494]: E0421 10:47:01.998529 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:47:02.870218 systemd-networkd[1375]: lxc_health: Link UP
Apr 21 10:47:02.879918 systemd-networkd[1375]: lxc_health: Gained carrier
Apr 21 10:47:03.960881 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Apr 21 10:47:03.998831 kubelet[2494]: E0421 10:47:03.998659 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:47:04.251955 systemd[1]: run-containerd-runc-k8s.io-6ced335166b5faef349ec8d18795c1297dafc4bf190612f3f15d8569edee8fc4-runc.u0k72w.mount: Deactivated successfully.
Apr 21 10:47:04.762236 kubelet[2494]: E0421 10:47:04.762174 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:47:05.765100 kubelet[2494]: E0421 10:47:05.765022 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:47:08.624338 sshd[4351]: pam_unix(sshd:session): session closed for user core
Apr 21 10:47:08.627437 systemd[1]: sshd@27-10.0.0.151:22-10.0.0.1:49532.service: Deactivated successfully.
Apr 21 10:47:08.628973 systemd[1]: session-28.scope: Deactivated successfully.
Apr 21 10:47:08.629794 systemd-logind[1436]: Session 28 logged out. Waiting for processes to exit.
Apr 21 10:47:08.630658 systemd-logind[1436]: Removed session 28.
Apr 21 10:47:09.916986 kubelet[2494]: E0421 10:47:09.916864 2494 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"