Apr 17 23:29:39.024927 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:29:39.024946 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:29:39.024955 kernel: BIOS-provided physical RAM map:
Apr 17 23:29:39.024959 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 23:29:39.024964 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 23:29:39.024968 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:29:39.024973 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 23:29:39.024978 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 23:29:39.024982 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:29:39.024988 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:29:39.024992 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:29:39.024996 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:29:39.025001 kernel: NX (Execute Disable) protection: active
Apr 17 23:29:39.025005 kernel: APIC: Static calls initialized
Apr 17 23:29:39.025011 kernel: SMBIOS 2.8 present.
Apr 17 23:29:39.025017 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 23:29:39.025022 kernel: Hypervisor detected: KVM
Apr 17 23:29:39.025026 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:29:39.025031 kernel: kvm-clock: using sched offset of 4110422215 cycles
Apr 17 23:29:39.025036 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:29:39.025041 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:29:39.025046 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:29:39.025051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:29:39.025056 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 23:29:39.025062 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:29:39.025067 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:29:39.025072 kernel: Using GB pages for direct mapping
Apr 17 23:29:39.025077 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:29:39.025082 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 23:29:39.025086 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025091 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025098 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025106 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 23:29:39.025169 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025177 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025184 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025192 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:29:39.025200 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 23:29:39.025209 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 23:29:39.025217 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 23:29:39.025227 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 23:29:39.025236 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 23:29:39.025245 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 23:29:39.025254 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 23:29:39.025263 kernel: No NUMA configuration found
Apr 17 23:29:39.025272 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 23:29:39.025280 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 17 23:29:39.025291 kernel: Zone ranges:
Apr 17 23:29:39.025300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:29:39.025309 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 23:29:39.025318 kernel: Normal empty
Apr 17 23:29:39.025326 kernel: Movable zone start for each node
Apr 17 23:29:39.025335 kernel: Early memory node ranges
Apr 17 23:29:39.025345 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:29:39.025353 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 23:29:39.025362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 23:29:39.025371 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:29:39.025383 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:29:39.025393 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 23:29:39.025403 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:29:39.025412 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:29:39.025422 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:29:39.025431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:29:39.025441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:29:39.025450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:29:39.025458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:29:39.025468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:29:39.025475 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:29:39.025483 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:29:39.025491 kernel: TSC deadline timer available
Apr 17 23:29:39.025740 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:29:39.025764 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:29:39.025769 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:29:39.025774 kernel: kvm-guest: setup PV sched yield
Apr 17 23:29:39.025779 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:29:39.025787 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:29:39.025792 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:29:39.025797 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:29:39.025802 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:29:39.025808 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:29:39.025813 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:29:39.025818 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:29:39.025826 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:29:39.025836 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:29:39.025846 kernel: random: crng init done
Apr 17 23:29:39.025852 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:29:39.025860 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:29:39.025867 kernel: Fallback order for Node 0: 0
Apr 17 23:29:39.025875 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 17 23:29:39.025883 kernel: Policy zone: DMA32
Apr 17 23:29:39.025892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:29:39.025901 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 17 23:29:39.025912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:29:39.025921 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:29:39.025930 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:29:39.025938 kernel: Dynamic Preempt: voluntary
Apr 17 23:29:39.025947 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:29:39.025964 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:29:39.025974 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:29:39.025983 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:29:39.025992 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:29:39.026003 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:29:39.026012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:29:39.026020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:29:39.026029 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:29:39.026038 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:29:39.026047 kernel: Console: colour VGA+ 80x25
Apr 17 23:29:39.026056 kernel: printk: console [ttyS0] enabled
Apr 17 23:29:39.026066 kernel: ACPI: Core revision 20230628
Apr 17 23:29:39.026074 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:29:39.026086 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:29:39.026096 kernel: x2apic enabled
Apr 17 23:29:39.026105 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:29:39.026187 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:29:39.026198 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:29:39.026207 kernel: kvm-guest: setup PV IPIs
Apr 17 23:29:39.026215 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:29:39.026235 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:29:39.026248 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:29:39.026254 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:29:39.026259 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:29:39.026265 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:29:39.026272 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:29:39.026277 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:29:39.026283 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:29:39.026289 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:29:39.026296 kernel: RETBleed: Vulnerable
Apr 17 23:29:39.026302 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:29:39.026307 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:29:39.026313 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:29:39.026319 kernel: active return thunk: its_return_thunk
Apr 17 23:29:39.026324 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:29:39.026330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:29:39.026335 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:29:39.026341 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:29:39.026348 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:29:39.026353 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:29:39.026359 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:29:39.026365 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:29:39.026370 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:29:39.026376 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:29:39.026381 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:29:39.026387 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:29:39.026392 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:29:39.026399 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:29:39.026405 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:29:39.026410 kernel: landlock: Up and running.
Apr 17 23:29:39.026416 kernel: SELinux: Initializing.
Apr 17 23:29:39.026421 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:29:39.026427 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:29:39.026432 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:29:39.026438 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:29:39.026444 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:29:39.026451 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:29:39.026456 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:29:39.026462 kernel: signal: max sigframe size: 3632
Apr 17 23:29:39.026468 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:29:39.026478 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:29:39.026487 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:29:39.026495 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:29:39.026535 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:29:39.026543 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:29:39.026553 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:29:39.026563 kernel: smpboot: Max logical packages: 1
Apr 17 23:29:39.026570 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:29:39.026578 kernel: devtmpfs: initialized
Apr 17 23:29:39.026587 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:29:39.026596 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:29:39.026603 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:29:39.026611 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:29:39.026622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:29:39.026632 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:29:39.026637 kernel: audit: type=2000 audit(1776468577.989:1): state=initialized audit_enabled=0 res=1
Apr 17 23:29:39.026643 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:29:39.026648 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:29:39.026654 kernel: cpuidle: using governor menu
Apr 17 23:29:39.026659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:29:39.026665 kernel: dca service started, version 1.12.1
Apr 17 23:29:39.026670 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:29:39.026675 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:29:39.026682 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:29:39.026688 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:29:39.026693 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:29:39.026699 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:29:39.026704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:29:39.026710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:29:39.026715 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:29:39.026720 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:29:39.026726 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:29:39.026733 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:29:39.026738 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:29:39.026744 kernel: ACPI: Interpreter enabled
Apr 17 23:29:39.026749 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:29:39.026754 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:29:39.026760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:29:39.026765 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:29:39.026771 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:29:39.026776 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:29:39.026953 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:29:39.027017 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:29:39.027072 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:29:39.027079 kernel: PCI host bridge to bus 0000:00
Apr 17 23:29:39.027191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:29:39.027244 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:29:39.027298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:29:39.027347 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:29:39.027418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:29:39.027572 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 23:29:39.027633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:29:39.027708 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:29:39.027778 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:29:39.027843 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 17 23:29:39.027900 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 17 23:29:39.027954 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 17 23:29:39.028009 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:29:39.028071 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:29:39.028221 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 17 23:29:39.028281 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 17 23:29:39.028340 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 23:29:39.028399 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:29:39.028455 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 17 23:29:39.028631 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 17 23:29:39.028723 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 23:29:39.028786 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:29:39.028846 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 17 23:29:39.028901 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 17 23:29:39.028955 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 23:29:39.029010 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 17 23:29:39.029069 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:29:39.029176 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:29:39.029237 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:29:39.029295 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 17 23:29:39.029350 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 17 23:29:39.029411 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:29:39.029466 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:29:39.029474 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:29:39.029479 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:29:39.029485 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:29:39.029491 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:29:39.029498 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:29:39.029531 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:29:39.029536 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:29:39.029542 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:29:39.029547 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:29:39.029553 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:29:39.029558 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:29:39.029563 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:29:39.029569 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:29:39.029576 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:29:39.029582 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:29:39.029587 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:29:39.029593 kernel: iommu: Default domain type: Translated
Apr 17 23:29:39.029598 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:29:39.029604 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:29:39.029609 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:29:39.029615 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 23:29:39.029620 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 23:29:39.029680 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:29:39.029736 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:29:39.029790 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:29:39.029797 kernel: vgaarb: loaded
Apr 17 23:29:39.029803 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:29:39.029808 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:29:39.029814 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:29:39.029819 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:29:39.029826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:29:39.029831 kernel: pnp: PnP ACPI init
Apr 17 23:29:39.029893 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:29:39.029901 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:29:39.029907 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:29:39.029913 kernel: NET: Registered PF_INET protocol family
Apr 17 23:29:39.029918 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:29:39.029924 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:29:39.029931 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:29:39.029937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:29:39.029942 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:29:39.029948 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:29:39.029953 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:29:39.029959 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:29:39.029964 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:29:39.029970 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:29:39.030020 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:29:39.030071 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:29:39.030166 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:29:39.030216 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:29:39.030265 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:29:39.030313 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 23:29:39.030320 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:29:39.030326 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:29:39.030332 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:29:39.030339 kernel: Initialise system trusted keyrings
Apr 17 23:29:39.030345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:29:39.030351 kernel: Key type asymmetric registered
Apr 17 23:29:39.030357 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:29:39.030362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:29:39.030368 kernel: io scheduler mq-deadline registered
Apr 17 23:29:39.030373 kernel: io scheduler kyber registered
Apr 17 23:29:39.030379 kernel: io scheduler bfq registered
Apr 17 23:29:39.030384 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:29:39.030392 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:29:39.030398 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:29:39.030403 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:29:39.030409 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:29:39.030414 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:29:39.030420 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:29:39.030425 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:29:39.030431 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:29:39.030487 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:29:39.030497 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:29:39.030578 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:29:39.030629 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:29:38 UTC (1776468578)
Apr 17 23:29:39.030680 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 23:29:39.030687 kernel: intel_pstate: CPU model not supported
Apr 17 23:29:39.030692 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:29:39.030698 kernel: Segment Routing with IPv6
Apr 17 23:29:39.030704 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:29:39.030711 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:29:39.030717 kernel: Key type dns_resolver registered
Apr 17 23:29:39.030722 kernel: IPI shorthand broadcast: enabled
Apr 17 23:29:39.030728 kernel: sched_clock: Marking stable (1307023386, 371517474)->(1805185339, -126644479)
Apr 17 23:29:39.030733 kernel: registered taskstats version 1
Apr 17 23:29:39.030739 kernel: Loading compiled-in X.509 certificates
Apr 17 23:29:39.030744 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:29:39.030750 kernel: Key type .fscrypt registered
Apr 17 23:29:39.030755 kernel: Key type fscrypt-provisioning registered
Apr 17 23:29:39.030761 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:29:39.030768 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:29:39.030774 kernel: ima: No architecture policies found
Apr 17 23:29:39.030779 kernel: clk: Disabling unused clocks
Apr 17 23:29:39.030785 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:29:39.030790 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:29:39.030796 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:29:39.030801 kernel: Run /init as init process
Apr 17 23:29:39.030807 kernel: with arguments:
Apr 17 23:29:39.030812 kernel: /init
Apr 17 23:29:39.030819 kernel: with environment:
Apr 17 23:29:39.030824 kernel: HOME=/
Apr 17 23:29:39.030830 kernel: TERM=linux
Apr 17 23:29:39.030837 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:29:39.030845 systemd[1]: Detected virtualization kvm.
Apr 17 23:29:39.030851 systemd[1]: Detected architecture x86-64.
Apr 17 23:29:39.030857 systemd[1]: Running in initrd.
Apr 17 23:29:39.030863 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:29:39.030870 systemd[1]: Hostname set to .
Apr 17 23:29:39.030876 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:29:39.030882 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:29:39.030887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:29:39.030893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:29:39.030899 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:29:39.030905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:29:39.030913 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:29:39.030919 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:29:39.030935 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:29:39.030941 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:29:39.030947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:29:39.030954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:29:39.030961 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:29:39.030966 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:29:39.030972 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:29:39.030978 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:29:39.030984 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:29:39.030991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:29:39.030997 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:29:39.031003 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:29:39.031010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:29:39.031016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:29:39.031022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:29:39.031028 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:29:39.031034 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:29:39.031040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:29:39.031046 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:29:39.031052 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:29:39.031060 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:29:39.031066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:29:39.031072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:29:39.031078 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:29:39.031095 systemd-journald[195]: Collecting audit messages is disabled.
Apr 17 23:29:39.031157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:29:39.031164 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:29:39.031174 systemd-journald[195]: Journal started
Apr 17 23:29:39.031190 systemd-journald[195]: Runtime Journal (/run/log/journal/f95d1b97e8284a348ccb06209ba4dd83) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:29:39.031219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:29:39.039185 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:29:39.040646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:29:39.040998 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:29:39.046266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:29:39.073065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:29:39.078096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:29:39.089533 systemd-modules-load[196]: Inserted module 'overlay'
Apr 17 23:29:39.270489 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:29:39.270541 kernel: Bridge firewalling registered
Apr 17 23:29:39.120692 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 17 23:29:39.122107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:29:39.274773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:29:39.298608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:29:39.306327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:29:39.317365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:29:39.321900 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:29:39.332718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:29:39.339069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:29:39.344573 dracut-cmdline[227]: dracut-dracut-053
Apr 17 23:29:39.344573 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:29:39.380301 systemd-resolved[234]: Positive Trust Anchors:
Apr 17 23:29:39.380334 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:29:39.380359 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:29:39.382567 systemd-resolved[234]: Defaulting to hostname 'linux'.
Apr 17 23:29:39.383421 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:29:39.387722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:29:39.453280 kernel: SCSI subsystem initialized
Apr 17 23:29:39.462236 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:29:39.475193 kernel: iscsi: registered transport (tcp)
Apr 17 23:29:39.500500 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:29:39.500605 kernel: QLogic iSCSI HBA Driver
Apr 17 23:29:39.538807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:29:39.548433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:29:39.580489 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:29:39.580644 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:29:39.580657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:29:39.630311 kernel: raid6: avx512x4 gen() 32939 MB/s
Apr 17 23:29:39.648295 kernel: raid6: avx512x2 gen() 38834 MB/s
Apr 17 23:29:39.666184 kernel: raid6: avx512x1 gen() 36048 MB/s
Apr 17 23:29:39.684306 kernel: raid6: avx2x4 gen() 32863 MB/s
Apr 17 23:29:39.702283 kernel: raid6: avx2x2 gen() 24972 MB/s
Apr 17 23:29:39.721907 kernel: raid6: avx2x1 gen() 21839 MB/s
Apr 17 23:29:39.722004 kernel: raid6: using algorithm avx512x2 gen() 38834 MB/s
Apr 17 23:29:39.742169 kernel: raid6: .... xor() 27109 MB/s, rmw enabled
Apr 17 23:29:39.742246 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:29:39.766470 kernel: xor: automatically using best checksumming function avx
Apr 17 23:29:39.948261 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:29:39.958634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:29:39.974288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:29:39.984730 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 17 23:29:39.987383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:29:39.992248 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:29:40.016097 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Apr 17 23:29:40.041951 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:29:40.063491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:29:40.102439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:29:40.117649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:29:40.134368 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:29:40.138377 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 23:29:40.139873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:29:40.140036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:29:40.155993 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 23:29:40.157096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:29:40.165059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:29:40.178788 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:29:40.178811 kernel: GPT:9289727 != 19775487
Apr 17 23:29:40.178818 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:29:40.178825 kernel: GPT:9289727 != 19775487
Apr 17 23:29:40.178851 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:29:40.178876 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:29:40.179419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:29:40.187473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:29:40.203748 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:29:40.203781 kernel: libata version 3.00 loaded.
Apr 17 23:29:40.203789 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:29:40.210474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:29:40.214183 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:29:40.222160 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 23:29:40.225277 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 23:29:40.240268 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (473)
Apr 17 23:29:40.240331 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 17 23:29:40.240451 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 17 23:29:40.240463 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 23:29:40.244632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 23:29:40.249162 kernel: scsi host0: ahci
Apr 17 23:29:40.251717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 23:29:40.456380 kernel: scsi host1: ahci
Apr 17 23:29:40.456688 kernel: scsi host2: ahci
Apr 17 23:29:40.456789 kernel: scsi host3: ahci
Apr 17 23:29:40.456873 kernel: scsi host4: ahci
Apr 17 23:29:40.456955 kernel: scsi host5: ahci
Apr 17 23:29:40.457036 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 17 23:29:40.457048 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 17 23:29:40.457056 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 17 23:29:40.457064 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 17 23:29:40.457073 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 17 23:29:40.457082 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 17 23:29:40.464000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 23:29:40.464382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:29:40.483639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 23:29:40.487008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:29:40.497392 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:29:40.501261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:29:40.508006 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:29:40.526284 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:29:40.534193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:29:40.535692 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:29:40.549378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:29:40.549395 disk-uuid[547]: Primary Header is updated.
Apr 17 23:29:40.549395 disk-uuid[547]: Secondary Entries is updated.
Apr 17 23:29:40.549395 disk-uuid[547]: Secondary Header is updated.
Apr 17 23:29:40.558745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:29:40.565255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:29:40.571755 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 23:29:40.571815 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 23:29:40.572170 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 23:29:40.575640 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:29:40.583937 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 23:29:40.586620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:29:40.591940 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 23:29:40.592012 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 23:29:40.604955 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 23:29:40.605035 kernel: ata3.00: applying bridge limits
Apr 17 23:29:40.608393 kernel: ata3.00: configured for UDMA/100
Apr 17 23:29:40.614420 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 23:29:40.679275 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 23:29:40.679470 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:29:40.703182 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 23:29:41.566218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:29:41.566767 disk-uuid[548]: The operation has completed successfully.
Apr 17 23:29:41.594661 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:29:41.594840 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:29:41.618643 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:29:41.625683 sh[592]: Success
Apr 17 23:29:41.638193 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 17 23:29:41.675465 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:29:41.689620 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:29:41.693302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:29:41.707231 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:29:41.707275 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:29:41.713284 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:29:41.713353 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:29:41.717950 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:29:41.727670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:29:41.732902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:29:41.743606 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:29:41.750156 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:29:41.764284 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:29:41.764368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:29:41.764383 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:29:41.772192 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:29:41.785469 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:29:41.791884 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:29:41.797325 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:29:41.806650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:29:41.865187 ignition[661]: Ignition 2.19.0
Apr 17 23:29:41.865219 ignition[661]: Stage: fetch-offline
Apr 17 23:29:41.865256 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:41.865265 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:41.865363 ignition[661]: parsed url from cmdline: ""
Apr 17 23:29:41.865366 ignition[661]: no config URL provided
Apr 17 23:29:41.865371 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:29:41.865379 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:29:41.865410 ignition[661]: op(1): [started] loading QEMU firmware config module
Apr 17 23:29:41.865415 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 23:29:41.873099 ignition[661]: op(1): [finished] loading QEMU firmware config module
Apr 17 23:29:41.952336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:29:41.976719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:29:41.996824 systemd-networkd[780]: lo: Link UP
Apr 17 23:29:41.996854 systemd-networkd[780]: lo: Gained carrier
Apr 17 23:29:41.997871 systemd-networkd[780]: Enumeration completed
Apr 17 23:29:41.997942 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:29:42.000249 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:29:42.000252 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:29:42.001244 systemd-networkd[780]: eth0: Link UP
Apr 17 23:29:42.001247 systemd-networkd[780]: eth0: Gained carrier
Apr 17 23:29:42.001255 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:29:42.029514 systemd[1]: Reached target network.target - Network.
Apr 17 23:29:42.045381 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:29:42.144072 ignition[661]: parsing config with SHA512: cd520735f375fcc0f4c642778189af7f8c70df2290a76cd8010366c3e7b8c6024ff911261dd2445b8e482357dc2c39d62a8929f04e0ca1944222db5df61e88c3
Apr 17 23:29:42.147829 unknown[661]: fetched base config from "system"
Apr 17 23:29:42.147839 unknown[661]: fetched user config from "qemu"
Apr 17 23:29:42.148360 ignition[661]: fetch-offline: fetch-offline passed
Apr 17 23:29:42.148412 ignition[661]: Ignition finished successfully
Apr 17 23:29:42.158783 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:29:42.159226 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 23:29:42.189677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:29:42.207337 ignition[784]: Ignition 2.19.0
Apr 17 23:29:42.207366 ignition[784]: Stage: kargs
Apr 17 23:29:42.207495 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:42.207502 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:42.208288 ignition[784]: kargs: kargs passed
Apr 17 23:29:42.208326 ignition[784]: Ignition finished successfully
Apr 17 23:29:42.222338 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:29:42.245420 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:29:42.265997 ignition[792]: Ignition 2.19.0
Apr 17 23:29:42.266028 ignition[792]: Stage: disks
Apr 17 23:29:42.266217 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:42.266223 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:42.266862 ignition[792]: disks: disks passed
Apr 17 23:29:42.266893 ignition[792]: Ignition finished successfully
Apr 17 23:29:42.280793 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:29:42.285404 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:29:42.291517 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:29:42.298088 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:29:42.306796 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:29:42.306955 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:29:42.330475 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:29:42.347095 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:29:42.351418 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:29:42.371290 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:29:42.482199 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:29:42.482711 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:29:42.483295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:29:42.504478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:29:42.514175 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Apr 17 23:29:42.508958 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:29:42.516319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:29:42.516367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:29:42.516400 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:29:42.538793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:29:42.557394 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:29:42.557423 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:29:42.557435 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:29:42.557443 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:29:42.548462 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:29:42.559222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:29:42.599944 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:29:42.607578 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:29:42.612647 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:29:42.618305 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:29:42.723248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:29:42.735459 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:29:42.747639 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:29:42.745891 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:29:42.755721 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:29:42.772352 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:29:42.788851 ignition[924]: INFO : Ignition 2.19.0
Apr 17 23:29:42.788851 ignition[924]: INFO : Stage: mount
Apr 17 23:29:42.793317 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:42.793317 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:42.793317 ignition[924]: INFO : mount: mount passed
Apr 17 23:29:42.793317 ignition[924]: INFO : Ignition finished successfully
Apr 17 23:29:42.802515 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:29:42.823655 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:29:42.832331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:29:42.846241 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Apr 17 23:29:42.846311 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:29:42.849210 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:29:42.853613 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:29:42.861323 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:29:42.862953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:29:42.903186 ignition[954]: INFO : Ignition 2.19.0
Apr 17 23:29:42.903186 ignition[954]: INFO : Stage: files
Apr 17 23:29:42.903186 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:42.903186 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:42.913662 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:29:42.913662 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:29:42.913662 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:29:42.913662 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:29:42.913662 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:29:42.913662 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:29:42.913662 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:29:42.913662 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:29:42.909615 unknown[954]: wrote ssh authorized keys file for user: core
Apr 17 23:29:42.955907 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:29:43.015762 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:29:43.015762 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:29:43.015762 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 17 23:29:43.053496 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 17 23:29:43.246295 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:29:43.310947 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:29:43.310947 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:29:43.320672 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:29:43.629095 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:29:47.100274 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:29:47.100274 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 17 23:29:47.113916 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 17 23:29:47.121500 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:29:47.187490 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:29:47.193468 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:29:47.199225 ignition[954]: INFO : files: files passed
Apr 17 23:29:47.199225 ignition[954]: INFO : Ignition finished successfully
Apr 17 23:29:47.213852 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:29:47.237541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:29:47.245656 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:29:47.255323 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:29:47.255451 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:29:47.265391 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:29:47.271267 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:29:47.271267 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:29:47.280534 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:29:47.287208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:29:47.287509 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:29:47.306355 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:29:47.331801 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:29:47.331965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:29:47.339189 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:29:47.345377 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:29:47.350962 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:29:47.367468 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:29:47.382474 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:29:47.395364 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:29:47.411758 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:29:47.411970 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:29:47.419191 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:29:47.425455 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:29:47.425694 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:29:47.436460 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:29:47.442881 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:29:47.448548 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:29:47.454294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:29:47.457075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:29:47.469859 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:29:47.471061 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:29:47.479499 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:29:47.486638 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:29:47.492410 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:29:47.499834 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:29:47.500332 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:29:47.509034 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:29:47.509259 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:29:47.515058 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:29:47.524377 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:29:47.524627 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:29:47.524752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:29:47.536942 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:29:47.537088 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:29:47.540052 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:29:47.545886 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:29:47.551450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:29:47.554050 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:29:47.563361 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:29:47.570218 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:29:47.570336 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:29:47.572589 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:29:47.572691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:29:47.577835 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:29:47.578027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:29:47.586894 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:29:47.587240 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:29:47.614635 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:29:47.617544 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:29:47.617773 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:29:47.628411 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:29:47.632018 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:29:47.633224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:29:47.648712 ignition[1009]: INFO : Ignition 2.19.0
Apr 17 23:29:47.648712 ignition[1009]: INFO : Stage: umount
Apr 17 23:29:47.648712 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:29:47.648712 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:29:47.648712 ignition[1009]: INFO : umount: umount passed
Apr 17 23:29:47.648712 ignition[1009]: INFO : Ignition finished successfully
Apr 17 23:29:47.641378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:29:47.641621 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:29:47.666418 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:29:47.667789 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:29:47.667924 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:29:47.672491 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:29:47.672632 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:29:47.680522 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:29:47.680663 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:29:47.686189 systemd[1]: Stopped target network.target - Network.
Apr 17 23:29:47.692344 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:29:47.692490 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:29:47.698259 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:29:47.698310 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:29:47.703220 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:29:47.703265 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:29:47.708425 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:29:47.708468 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:29:47.711782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:29:47.711824 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:29:47.720465 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:29:47.725531 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:29:47.732187 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 17 23:29:47.735367 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:29:47.735518 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:29:47.741733 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:29:47.741857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:29:47.746046 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:29:47.746096 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:29:47.762637 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:29:47.767369 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:29:47.767426 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:29:47.773848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:29:47.773899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:29:47.778278 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:29:47.778328 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:29:47.788937 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:29:47.789056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:29:47.797313 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:29:47.818838 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:29:47.818953 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:29:47.826326 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:29:47.826495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:29:47.835076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:29:47.835329 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:29:47.836728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:29:47.836761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:29:47.841893 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:29:47.841978 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:29:47.853810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:29:47.853930 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:29:47.863095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:29:47.863269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:29:47.907026 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:29:47.934465 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:29:47.934692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:29:47.944299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:29:47.944702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:29:47.953804 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:29:47.953910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:29:47.965235 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:29:47.981348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:29:48.002228 systemd[1]: Switching root.
Apr 17 23:29:48.034221 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:29:48.034275 systemd-journald[195]: Journal stopped
Apr 17 23:29:49.167456 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:29:49.167504 kernel: SELinux: policy capability open_perms=1
Apr 17 23:29:49.167517 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:29:49.167525 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:29:49.167536 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:29:49.167544 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:29:49.167552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:29:49.167560 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:29:49.167605 kernel: audit: type=1403 audit(1776468588.231:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:29:49.167620 systemd[1]: Successfully loaded SELinux policy in 47.505ms.
Apr 17 23:29:49.167637 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.326ms.
Apr 17 23:29:49.167647 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:29:49.167656 systemd[1]: Detected virtualization kvm.
Apr 17 23:29:49.167666 systemd[1]: Detected architecture x86-64.
Apr 17 23:29:49.167673 systemd[1]: Detected first boot.
Apr 17 23:29:49.167682 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:29:49.167690 zram_generator::config[1055]: No configuration found.
Apr 17 23:29:49.167699 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:29:49.167707 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:29:49.167717 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:29:49.167726 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:29:49.167734 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:29:49.167742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:29:49.167750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:29:49.167758 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:29:49.167766 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:29:49.167774 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:29:49.167784 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:29:49.167791 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:29:49.167799 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:29:49.167807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:29:49.167814 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:29:49.167822 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:29:49.167830 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:29:49.167839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:29:49.167846 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:29:49.167857 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:29:49.167864 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:29:49.167873 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:29:49.167880 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:29:49.167888 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:29:49.167896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:29:49.167904 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:29:49.167912 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:29:49.167920 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:29:49.167928 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:29:49.167936 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:29:49.167944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:29:49.167951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:29:49.167959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:29:49.167967 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:29:49.167974 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:29:49.167982 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:29:49.167991 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:29:49.167998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:29:49.168007 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:29:49.168014 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:29:49.168022 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:29:49.168030 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:29:49.168038 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:29:49.168045 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:29:49.168053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:29:49.168062 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:29:49.168069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:29:49.168078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:29:49.168086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:29:49.168094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:29:49.168102 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:29:49.168110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:29:49.168167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:29:49.168177 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:29:49.168184 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:29:49.168192 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:29:49.168200 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:29:49.168208 kernel: fuse: init (API version 7.39)
Apr 17 23:29:49.168216 kernel: loop: module loaded
Apr 17 23:29:49.168224 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:29:49.168232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:29:49.168240 kernel: ACPI: bus type drm_connector registered
Apr 17 23:29:49.168249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:29:49.168270 systemd-journald[1139]: Collecting audit messages is disabled.
Apr 17 23:29:49.168291 systemd-journald[1139]: Journal started
Apr 17 23:29:49.168307 systemd-journald[1139]: Runtime Journal (/run/log/journal/f95d1b97e8284a348ccb06209ba4dd83) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:29:48.698783 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:29:48.727842 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:29:48.728305 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:29:49.190195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:29:49.199929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:29:49.204430 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:29:49.204467 systemd[1]: Stopped verity-setup.service.
Apr 17 23:29:49.212179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:29:49.216395 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:29:49.218789 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:29:49.222240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:29:49.225733 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:29:49.228670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:29:49.231938 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:29:49.235254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:29:49.238818 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:29:49.242472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:29:49.246537 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:29:49.246731 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:29:49.250396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:29:49.250546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:29:49.254040 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:29:49.254251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:29:49.257474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:29:49.257667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:29:49.261385 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:29:49.261527 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:29:49.264789 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:29:49.264931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:29:49.268644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:29:49.272511 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:29:49.276520 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:29:49.280316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:29:49.295098 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:29:49.311704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:29:49.316333 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:29:49.319536 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:29:49.319623 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:29:49.323619 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:29:49.328383 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:29:49.332739 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:29:49.335702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:29:49.339666 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:29:49.344678 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:29:49.348809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:29:49.349900 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:29:49.353504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:29:49.359355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:29:49.364525 systemd-journald[1139]: Time spent on flushing to /var/log/journal/f95d1b97e8284a348ccb06209ba4dd83 is 22.556ms for 954 entries.
Apr 17 23:29:49.364525 systemd-journald[1139]: System Journal (/var/log/journal/f95d1b97e8284a348ccb06209ba4dd83) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:29:49.401383 systemd-journald[1139]: Received client request to flush runtime journal.
Apr 17 23:29:49.401418 kernel: loop0: detected capacity change from 0 to 142488
Apr 17 23:29:49.364416 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:29:49.372373 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:29:49.379489 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:29:49.385102 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:29:49.389803 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:29:49.394504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:29:49.399345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:29:49.405412 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:29:49.417827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:29:49.427297 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:29:49.427280 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:29:49.430898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:29:49.437346 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:29:49.453849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:29:49.454444 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:29:49.465229 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:29:49.476169 kernel: loop1: detected capacity change from 0 to 140768
Apr 17 23:29:49.477474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:29:49.498671 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 17 23:29:49.498684 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 17 23:29:49.502966 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:29:49.524221 kernel: loop2: detected capacity change from 0 to 219192
Apr 17 23:29:49.566221 kernel: loop3: detected capacity change from 0 to 142488
Apr 17 23:29:49.584406 kernel: loop4: detected capacity change from 0 to 140768
Apr 17 23:29:49.605219 kernel: loop5: detected capacity change from 0 to 219192
Apr 17 23:29:49.619040 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 23:29:49.620377 (sd-merge)[1195]: Merged extensions into '/usr'.
Apr 17 23:29:49.625106 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:29:49.625303 systemd[1]: Reloading...
Apr 17 23:29:49.684364 zram_generator::config[1221]: No configuration found.
Apr 17 23:29:49.760508 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:29:49.790863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:29:49.823392 systemd[1]: Reloading finished in 197 ms.
Apr 17 23:29:49.854042 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:29:49.858327 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:29:49.862689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:29:49.887920 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:29:49.891923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:29:49.896877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:29:49.901997 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:29:49.902033 systemd[1]: Reloading...
Apr 17 23:29:49.910378 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:29:49.910739 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:29:49.911472 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:29:49.911692 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 17 23:29:49.911732 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 17 23:29:49.913712 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:29:49.913741 systemd-tmpfiles[1260]: Skipping /boot
Apr 17 23:29:49.919516 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:29:49.919632 systemd-tmpfiles[1260]: Skipping /boot
Apr 17 23:29:49.940402 zram_generator::config[1287]: No configuration found.
Apr 17 23:29:49.945342 systemd-udevd[1261]: Using default interface naming scheme 'v255'.
Apr 17 23:29:50.052165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1318)
Apr 17 23:29:50.053434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:29:50.061264 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 17 23:29:50.090216 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:29:50.126271 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:29:50.127263 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:29:50.128474 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:29:50.140864 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:29:50.141753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:29:50.147467 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:29:50.148745 systemd[1]: Reloading finished in 246 ms.
Apr 17 23:29:50.244965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:29:50.245293 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:29:50.260329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:29:50.342215 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:29:50.364720 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:29:50.372522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:29:50.391780 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:29:50.397179 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:29:50.401640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:29:50.403006 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:29:50.410351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:29:50.416512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:29:50.420852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:29:50.426492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:29:50.429827 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:29:50.430294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:29:50.432419 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:29:50.438803 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:29:50.445352 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:29:50.447613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:29:50.450481 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 23:29:50.455262 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 17 23:29:50.460810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:29:50.464971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:29:50.465934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:29:50.466103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:29:50.470716 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:29:50.470908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:29:50.474650 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:29:50.479969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:29:50.480224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:29:50.493849 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:29:50.493977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:29:50.497702 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:29:50.502235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:29:50.508843 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:29:50.513298 augenrules[1387]: No rules Apr 17 23:29:50.515866 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:29:50.518988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:29:50.521328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:29:50.531525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 17 23:29:50.531685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:29:50.531748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:29:50.532973 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:29:50.538343 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:29:50.540269 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:29:50.543479 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:29:50.550686 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:29:50.569171 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:29:50.576210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:29:50.639678 systemd-networkd[1378]: lo: Link UP Apr 17 23:29:50.639685 systemd-networkd[1378]: lo: Gained carrier Apr 17 23:29:50.640556 systemd-networkd[1378]: Enumeration completed Apr 17 23:29:50.641475 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:29:50.641503 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:29:50.642509 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 17 23:29:50.642728 systemd-networkd[1378]: eth0: Link UP Apr 17 23:29:50.642731 systemd-networkd[1378]: eth0: Gained carrier Apr 17 23:29:50.642743 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:29:50.647978 systemd-resolved[1379]: Positive Trust Anchors: Apr 17 23:29:50.647992 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:29:50.648022 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:29:50.652276 systemd-resolved[1379]: Defaulting to hostname 'linux'. Apr 17 23:29:50.664351 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:29:50.665104 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Apr 17 23:29:51.795857 systemd-resolved[1379]: Clock change detected. Flushing caches. Apr 17 23:29:51.795897 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 17 23:29:51.795955 systemd-timesyncd[1380]: Initial clock synchronization to Fri 2026-04-17 23:29:51.795801 UTC. Apr 17 23:29:51.908578 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 23:29:51.915703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:29:51.919389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:29:51.923595 systemd[1]: Reached target network.target - Network. Apr 17 23:29:51.926213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:29:51.929670 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:29:51.933019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:29:51.936847 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:29:51.941425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:29:51.945116 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:29:51.945162 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:29:51.947795 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:29:51.951321 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:29:51.954465 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:29:51.958432 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:29:51.962141 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:29:51.966779 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:29:51.981389 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:29:51.988633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:29:51.993181 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:29:51.997474 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:29:52.000825 systemd[1]: Reached target basic.target - Basic System. 
Apr 17 23:29:52.004057 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:29:52.004177 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:29:52.005360 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:29:52.010915 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:29:52.016903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:29:52.023287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:29:52.026773 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:29:52.027817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:29:52.028467 jq[1425]: false Apr 17 23:29:52.033013 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:29:52.040405 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 17 23:29:52.043501 extend-filesystems[1426]: Found loop3 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found loop4 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found loop5 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found sr0 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda1 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda2 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda3 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found usr Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda4 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda6 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda7 Apr 17 23:29:52.049684 extend-filesystems[1426]: Found vda9 Apr 17 23:29:52.049684 extend-filesystems[1426]: Checking size of /dev/vda9 Apr 17 23:29:52.101769 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 17 23:29:52.101794 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1308) Apr 17 23:29:52.049675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:29:52.056290 dbus-daemon[1424]: [system] SELinux support is enabled Apr 17 23:29:52.102039 extend-filesystems[1426]: Resized partition /dev/vda9 Apr 17 23:29:52.062417 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:29:52.105278 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:29:52.078185 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:29:52.090006 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:29:52.099259 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 17 23:29:52.119367 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 17 23:29:52.113248 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:29:52.119660 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:29:52.151417 update_engine[1446]: I20260417 23:29:52.131043 1446 main.cc:92] Flatcar Update Engine starting Apr 17 23:29:52.151417 update_engine[1446]: I20260417 23:29:52.133785 1446 update_check_scheduler.cc:74] Next update check in 3m37s Apr 17 23:29:52.132465 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:29:52.163506 jq[1448]: true Apr 17 23:29:52.163665 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 23:29:52.163665 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 23:29:52.163665 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 23:29:52.132724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:29:52.174707 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Apr 17 23:29:52.132954 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:29:52.177267 tar[1450]: linux-amd64/LICENSE Apr 17 23:29:52.177267 tar[1450]: linux-amd64/helm Apr 17 23:29:52.133056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:29:52.178019 jq[1451]: true Apr 17 23:29:52.139499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:29:52.139848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 17 23:29:52.151392 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:29:52.151405 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:29:52.152007 systemd-logind[1434]: New seat seat0. Apr 17 23:29:52.178393 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:29:52.156471 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:29:52.156876 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:29:52.161229 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:29:52.179304 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:29:52.185260 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:29:52.190330 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:29:52.192037 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:29:52.195948 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:29:52.196034 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:29:52.203752 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:29:52.207369 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:29:52.226658 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:29:52.228897 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Apr 17 23:29:52.233762 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:29:52.240358 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:29:52.251440 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:29:52.259263 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:29:52.259491 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:29:52.264327 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:29:52.270333 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:29:52.282424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:29:52.292594 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:29:52.298524 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:29:52.304340 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:29:52.355279 containerd[1454]: time="2026-04-17T23:29:52.355171961Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:29:52.374454 containerd[1454]: time="2026-04-17T23:29:52.374407548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377169 containerd[1454]: time="2026-04-17T23:29:52.377142911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377233 containerd[1454]: time="2026-04-17T23:29:52.377224424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 17 23:29:52.377283 containerd[1454]: time="2026-04-17T23:29:52.377276229Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:29:52.377417 containerd[1454]: time="2026-04-17T23:29:52.377408801Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:29:52.377456 containerd[1454]: time="2026-04-17T23:29:52.377449347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377518 containerd[1454]: time="2026-04-17T23:29:52.377508745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377594 containerd[1454]: time="2026-04-17T23:29:52.377585946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377749 containerd[1454]: time="2026-04-17T23:29:52.377737934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377779 containerd[1454]: time="2026-04-17T23:29:52.377773807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377808 containerd[1454]: time="2026-04-17T23:29:52.377801469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377839 containerd[1454]: time="2026-04-17T23:29:52.377833506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:29:52.377916 containerd[1454]: time="2026-04-17T23:29:52.377907488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.378157 containerd[1454]: time="2026-04-17T23:29:52.378144035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:29:52.378288 containerd[1454]: time="2026-04-17T23:29:52.378276963Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:29:52.378318 containerd[1454]: time="2026-04-17T23:29:52.378312801Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:29:52.378399 containerd[1454]: time="2026-04-17T23:29:52.378390991Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:29:52.378452 containerd[1454]: time="2026-04-17T23:29:52.378445959Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:29:52.383945 containerd[1454]: time="2026-04-17T23:29:52.383924849Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:29:52.384042 containerd[1454]: time="2026-04-17T23:29:52.384030907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:29:52.384268 containerd[1454]: time="2026-04-17T23:29:52.384255064Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:29:52.384324 containerd[1454]: time="2026-04-17T23:29:52.384314735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 17 23:29:52.384372 containerd[1454]: time="2026-04-17T23:29:52.384363012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:29:52.384515 containerd[1454]: time="2026-04-17T23:29:52.384504126Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:29:52.385151 containerd[1454]: time="2026-04-17T23:29:52.385051498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:29:52.385331 containerd[1454]: time="2026-04-17T23:29:52.385320975Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:29:52.385368 containerd[1454]: time="2026-04-17T23:29:52.385362516Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:29:52.385416 containerd[1454]: time="2026-04-17T23:29:52.385407986Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:29:52.385447 containerd[1454]: time="2026-04-17T23:29:52.385441448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385482 containerd[1454]: time="2026-04-17T23:29:52.385475540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385510 containerd[1454]: time="2026-04-17T23:29:52.385504803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385585 containerd[1454]: time="2026-04-17T23:29:52.385577777Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 17 23:29:52.385624 containerd[1454]: time="2026-04-17T23:29:52.385617369Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385665 containerd[1454]: time="2026-04-17T23:29:52.385653777Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385701 containerd[1454]: time="2026-04-17T23:29:52.385694096Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385728 containerd[1454]: time="2026-04-17T23:29:52.385722426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:29:52.385761 containerd[1454]: time="2026-04-17T23:29:52.385755374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385790 containerd[1454]: time="2026-04-17T23:29:52.385784908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385822 containerd[1454]: time="2026-04-17T23:29:52.385816641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385855 containerd[1454]: time="2026-04-17T23:29:52.385849631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385883 containerd[1454]: time="2026-04-17T23:29:52.385877296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385911 containerd[1454]: time="2026-04-17T23:29:52.385905560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385938 containerd[1454]: time="2026-04-17T23:29:52.385932761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 17 23:29:52.385965 containerd[1454]: time="2026-04-17T23:29:52.385960155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.385997 containerd[1454]: time="2026-04-17T23:29:52.385990910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386031 containerd[1454]: time="2026-04-17T23:29:52.386024368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386058 containerd[1454]: time="2026-04-17T23:29:52.386052792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386148 containerd[1454]: time="2026-04-17T23:29:52.386141212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386178 containerd[1454]: time="2026-04-17T23:29:52.386172793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386209 containerd[1454]: time="2026-04-17T23:29:52.386204017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:29:52.386251 containerd[1454]: time="2026-04-17T23:29:52.386244491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386286 containerd[1454]: time="2026-04-17T23:29:52.386279099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386313 containerd[1454]: time="2026-04-17T23:29:52.386307309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:29:52.386398 containerd[1454]: time="2026-04-17T23:29:52.386390963Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 17 23:29:52.386578 containerd[1454]: time="2026-04-17T23:29:52.386567306Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:29:52.386609 containerd[1454]: time="2026-04-17T23:29:52.386603461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:29:52.386644 containerd[1454]: time="2026-04-17T23:29:52.386637193Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:29:52.386670 containerd[1454]: time="2026-04-17T23:29:52.386664746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:29:52.386697 containerd[1454]: time="2026-04-17T23:29:52.386691904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:29:52.386728 containerd[1454]: time="2026-04-17T23:29:52.386722814Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:29:52.386754 containerd[1454]: time="2026-04-17T23:29:52.386748583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:29:52.386997 containerd[1454]: time="2026-04-17T23:29:52.386963896Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:29:52.387236 containerd[1454]: time="2026-04-17T23:29:52.387226836Z" level=info msg="Connect containerd service"
Apr 17 23:29:52.387289 containerd[1454]: time="2026-04-17T23:29:52.387282995Z" level=info msg="using legacy CRI server"
Apr 17 23:29:52.387314 containerd[1454]: time="2026-04-17T23:29:52.387308598Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:29:52.387408 containerd[1454]: time="2026-04-17T23:29:52.387400001Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:29:52.388008 containerd[1454]: time="2026-04-17T23:29:52.387984717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:29:52.388327 containerd[1454]: time="2026-04-17T23:29:52.388308478Z" level=info msg="Start subscribing containerd event"
Apr 17 23:29:52.388423 containerd[1454]: time="2026-04-17T23:29:52.388414252Z" level=info msg="Start recovering state"
Apr 17 23:29:52.388827 containerd[1454]: time="2026-04-17T23:29:52.388814562Z" level=info msg="Start event monitor"
Apr 17 23:29:52.388871 containerd[1454]: time="2026-04-17T23:29:52.388863995Z" level=info msg="Start snapshots syncer"
Apr 17 23:29:52.388973 containerd[1454]: time="2026-04-17T23:29:52.388690125Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:29:52.389025 containerd[1454]: time="2026-04-17T23:29:52.388992880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:29:52.389042 containerd[1454]: time="2026-04-17T23:29:52.388891312Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:29:52.389042 containerd[1454]: time="2026-04-17T23:29:52.389036439Z" level=info msg="Start streaming server"
Apr 17 23:29:52.389206 containerd[1454]: time="2026-04-17T23:29:52.389160651Z" level=info msg="containerd successfully booted in 0.034786s"
Apr 17 23:29:52.390615 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:29:52.616700 tar[1450]: linux-amd64/README.md
Apr 17 23:29:52.641586 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:29:53.655799 systemd-networkd[1378]: eth0: Gained IPv6LL
Apr 17 23:29:53.658898 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:29:53.663314 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:29:53.673422 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 23:29:53.678020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:29:53.682465 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:29:53.700970 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 23:29:53.701182 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 23:29:53.704865 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:29:53.708669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:29:54.007378 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:29:54.011685 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362). Apr 17 23:29:54.055730 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.057598 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.065767 systemd-logind[1434]: New session 1 of user core. Apr 17 23:29:54.066624 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:29:54.075399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:29:54.086385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:29:54.098393 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:29:54.103702 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:29:54.195345 systemd[1536]: Queued start job for default target default.target. Apr 17 23:29:54.210778 systemd[1536]: Created slice app.slice - User Application Slice. Apr 17 23:29:54.210830 systemd[1536]: Reached target paths.target - Paths. Apr 17 23:29:54.210842 systemd[1536]: Reached target timers.target - Timers. Apr 17 23:29:54.212138 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:29:54.225123 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:29:54.225203 systemd[1536]: Reached target sockets.target - Sockets. Apr 17 23:29:54.225213 systemd[1536]: Reached target basic.target - Basic System. Apr 17 23:29:54.225235 systemd[1536]: Reached target default.target - Main User Target. Apr 17 23:29:54.225255 systemd[1536]: Startup finished in 114ms. 
Apr 17 23:29:54.225527 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:29:54.240338 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:29:54.317342 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:41366.service - OpenSSH per-connection server daemon (10.0.0.1:41366). Apr 17 23:29:54.355890 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 41366 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.357397 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.361333 systemd-logind[1434]: New session 2 of user core. Apr 17 23:29:54.378303 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:29:54.398764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:29:54.402893 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:29:54.403421 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:29:54.409808 systemd[1]: Startup finished in 1.449s (kernel) + 9.504s (initrd) + 5.094s (userspace) = 16.048s. Apr 17 23:29:54.438179 sshd[1547]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.445358 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:41366.service: Deactivated successfully. Apr 17 23:29:54.446823 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:29:54.447736 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:29:54.455382 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:41380.service - OpenSSH per-connection server daemon (10.0.0.1:41380). Apr 17 23:29:54.456254 systemd-logind[1434]: Removed session 2. 
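The "Startup finished" entry above reports three boot phases and a total (1.449s kernel + 9.504s initrd + 5.094s userspace = 16.048s). As a sanity check, the phases should sum to the total up to rounding; a small shell/awk sketch with the values copied from that entry:

```shell
# Sum the boot phases from the "Startup finished" log entry and compare
# against the reported total of 16.048s. The three rounded phase times
# actually sum to 16.047, so agreement is only up to the last digit,
# because systemd rounds each phase independently.
kernel=1.449
initrd=9.504
userspace=5.094
computed=$(awk -v k="$kernel" -v i="$initrd" -v u="$userspace" \
  'BEGIN { printf "%.3f", k + i + u }')
echo "computed=${computed}s reported=16.048s"
```

On a live system the same breakdown comes from `systemd-analyze`; the arithmetic above only cross-checks the figures already present in the log.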
Apr 17 23:29:54.497038 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 41380 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.498265 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.502737 systemd-logind[1434]: New session 3 of user core. Apr 17 23:29:54.511625 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:29:54.561515 sshd[1560]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.571471 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:41380.service: Deactivated successfully. Apr 17 23:29:54.573507 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:29:54.574708 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:29:54.580399 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:41382.service - OpenSSH per-connection server daemon (10.0.0.1:41382). Apr 17 23:29:54.582345 systemd-logind[1434]: Removed session 3. Apr 17 23:29:54.611880 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 41382 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.613020 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.618334 systemd-logind[1434]: New session 4 of user core. Apr 17 23:29:54.625248 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:29:54.682303 sshd[1576]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.689115 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:41382.service: Deactivated successfully. Apr 17 23:29:54.690732 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:29:54.692824 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:29:54.699979 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:41394.service - OpenSSH per-connection server daemon (10.0.0.1:41394). Apr 17 23:29:54.701752 systemd-logind[1434]: Removed session 4. 
Apr 17 23:29:54.732245 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 41394 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.733535 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.738795 systemd-logind[1434]: New session 5 of user core. Apr 17 23:29:54.752482 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:29:54.817939 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:29:54.818313 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:29:54.829159 kubelet[1555]: E0417 23:29:54.828804 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:29:54.832423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:29:54.832644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:29:54.833761 sudo[1587]: pam_unix(sudo:session): session closed for user root Apr 17 23:29:54.836784 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.851769 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:41394.service: Deactivated successfully. Apr 17 23:29:54.853226 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:29:54.854464 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:29:54.855668 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:41408.service - OpenSSH per-connection server daemon (10.0.0.1:41408). Apr 17 23:29:54.856768 systemd-logind[1434]: Removed session 5. 
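The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`, exit status 1) is expected on a node where `kubeadm init`/`join` has not yet generated the config. A hedged pre-flight sketch; the path comes from the error message, and the guard function itself is an illustration, not part of the Flatcar units:

```shell
# Pre-flight check mirroring the failure in the log: kubelet exits with
# status 1 when its config file is missing. Returns 0 if present, 1 if not.
check_kubelet_config() {
  cfg="${1:-/var/lib/kubelet/config.yaml}"   # path taken from the log's error
  if [ -f "$cfg" ]; then
    echo "config present: $cfg"
  else
    echo "config missing: $cfg (kubeadm init/join has not run yet)" >&2
    return 1
  fi
}
```

On this node the check would fail until kubeadm writes the file, matching the kubelet.service 'exit-code' failures that recur later in the log.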
Apr 17 23:29:54.889734 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 41408 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:54.891286 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.896321 systemd-logind[1434]: New session 6 of user core. Apr 17 23:29:54.906341 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:29:54.960347 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:29:54.960683 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:29:54.964762 sudo[1598]: pam_unix(sudo:session): session closed for user root Apr 17 23:29:54.969482 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:29:54.969728 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:29:54.983539 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:29:54.985497 auditctl[1601]: No rules Apr 17 23:29:54.985804 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:29:54.985979 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:29:54.989003 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:29:55.019517 augenrules[1619]: No rules Apr 17 23:29:55.020984 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:29:55.021967 sudo[1597]: pam_unix(sudo:session): session closed for user root Apr 17 23:29:55.023577 sshd[1594]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:55.030120 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:41408.service: Deactivated successfully. Apr 17 23:29:55.031311 systemd[1]: session-6.scope: Deactivated successfully. 
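The sudo session above removes the rule files under /etc/audit/rules.d and restarts audit-rules, after which both auditctl and augenrules report "No rules". A small sketch that counts remaining *.rules files; the directory is parameterized, with the default matching the path used in the log:

```shell
# Count audit rule files in a rules.d-style directory; a count of 0
# corresponds to the "No rules" messages from auditctl/augenrules above.
count_audit_rules() {
  dir="${1:-/etc/audit/rules.d}"
  find "$dir" -maxdepth 1 -name '*.rules' 2>/dev/null | wc -l
}
```

augenrules concatenates whatever *.rules files it finds in that directory, so an empty directory yields an empty active rule set after the service restart.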
Apr 17 23:29:55.032627 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:29:55.033671 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:41422.service - OpenSSH per-connection server daemon (10.0.0.1:41422). Apr 17 23:29:55.034690 systemd-logind[1434]: Removed session 6. Apr 17 23:29:55.068479 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 41422 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:29:55.069819 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:55.074214 systemd-logind[1434]: New session 7 of user core. Apr 17 23:29:55.092316 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:29:55.147324 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:29:55.147660 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:29:55.418419 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:29:55.418535 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:29:55.689451 dockerd[1650]: time="2026-04-17T23:29:55.689250414Z" level=info msg="Starting up" Apr 17 23:29:55.897977 dockerd[1650]: time="2026-04-17T23:29:55.897873680Z" level=info msg="Loading containers: start." Apr 17 23:29:56.024139 kernel: Initializing XFRM netlink socket Apr 17 23:29:56.112427 systemd-networkd[1378]: docker0: Link UP Apr 17 23:29:56.135335 dockerd[1650]: time="2026-04-17T23:29:56.135249707Z" level=info msg="Loading containers: done." Apr 17 23:29:56.149535 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1566701086-merged.mount: Deactivated successfully. 
Apr 17 23:29:56.150690 dockerd[1650]: time="2026-04-17T23:29:56.150623238Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:29:56.150798 dockerd[1650]: time="2026-04-17T23:29:56.150764142Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:29:56.150880 dockerd[1650]: time="2026-04-17T23:29:56.150850402Z" level=info msg="Daemon has completed initialization" Apr 17 23:29:56.194788 dockerd[1650]: time="2026-04-17T23:29:56.194654182Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:29:56.195769 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:29:56.659592 containerd[1454]: time="2026-04-17T23:29:56.659388647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 23:29:57.479538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956001293.mount: Deactivated successfully. 
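dockerd reports its storage configuration as structured key=value fields in the "Docker daemon" entry above (storage-driver=overlay2, containerd-snapshotter=false). Extracting such a field from a captured log line needs only standard tools; the sample line below is an abridged copy of that entry:

```shell
# Pull the storage-driver field out of a dockerd log line. The line is a
# shortened copy of the "Docker daemon" entry from the log above.
line='level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0'
driver=$(printf '%s\n' "$line" | grep -o 'storage-driver=[^ ]*' | cut -d= -f2)
echo "storage driver: $driver"   # prints "storage driver: overlay2"
```

On a running host the same information is available from `docker info`; parsing the journal line is useful when only logs are at hand, as here.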
Apr 17 23:29:58.312271 containerd[1454]: time="2026-04-17T23:29:58.312028773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:58.312917 containerd[1454]: time="2026-04-17T23:29:58.312865555Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952"
Apr 17 23:29:58.313960 containerd[1454]: time="2026-04-17T23:29:58.313922254Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:58.316888 containerd[1454]: time="2026-04-17T23:29:58.316811325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:58.317967 containerd[1454]: time="2026-04-17T23:29:58.317909520Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.658469609s"
Apr 17 23:29:58.317967 containerd[1454]: time="2026-04-17T23:29:58.317959416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 17 23:29:58.319045 containerd[1454]: time="2026-04-17T23:29:58.319002944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 17 23:29:59.303604 containerd[1454]: time="2026-04-17T23:29:59.303426233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:59.304541 containerd[1454]: time="2026-04-17T23:29:59.304479069Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670"
Apr 17 23:29:59.305985 containerd[1454]: time="2026-04-17T23:29:59.305939728Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:59.308785 containerd[1454]: time="2026-04-17T23:29:59.308719524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:59.309897 containerd[1454]: time="2026-04-17T23:29:59.309856542Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 990.808047ms"
Apr 17 23:29:59.309988 containerd[1454]: time="2026-04-17T23:29:59.309897723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 17 23:29:59.310778 containerd[1454]: time="2026-04-17T23:29:59.310627774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 17 23:30:00.150526 containerd[1454]: time="2026-04-17T23:30:00.150335648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:00.151313 containerd[1454]: time="2026-04-17T23:30:00.151260702Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823"
Apr 17 23:30:00.152412 containerd[1454]: time="2026-04-17T23:30:00.152363892Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:00.156823 containerd[1454]: time="2026-04-17T23:30:00.156774038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:00.157944 containerd[1454]: time="2026-04-17T23:30:00.157893071Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 847.240156ms"
Apr 17 23:30:00.157944 containerd[1454]: time="2026-04-17T23:30:00.157942734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 17 23:30:00.158736 containerd[1454]: time="2026-04-17T23:30:00.158637881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 17 23:30:01.056378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367641163.mount: Deactivated successfully.
Apr 17 23:30:01.320965 containerd[1454]: time="2026-04-17T23:30:01.320902393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:01.321917 containerd[1454]: time="2026-04-17T23:30:01.321855495Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 17 23:30:01.322649 containerd[1454]: time="2026-04-17T23:30:01.322551882Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:01.325553 containerd[1454]: time="2026-04-17T23:30:01.325489245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:01.325975 containerd[1454]: time="2026-04-17T23:30:01.325928671Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.167221189s" Apr 17 23:30:01.326005 containerd[1454]: time="2026-04-17T23:30:01.325972274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 17 23:30:01.326858 containerd[1454]: time="2026-04-17T23:30:01.326658722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 17 23:30:01.728536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220181175.mount: Deactivated successfully. 
Apr 17 23:30:02.595785 containerd[1454]: time="2026-04-17T23:30:02.595558436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:02.596428 containerd[1454]: time="2026-04-17T23:30:02.596370374Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 17 23:30:02.597545 containerd[1454]: time="2026-04-17T23:30:02.597470875Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:02.600688 containerd[1454]: time="2026-04-17T23:30:02.600177283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:02.601338 containerd[1454]: time="2026-04-17T23:30:02.601276833Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.274591792s" Apr 17 23:30:02.601338 containerd[1454]: time="2026-04-17T23:30:02.601327945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 17 23:30:02.601974 containerd[1454]: time="2026-04-17T23:30:02.601930177Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 23:30:03.059879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061549127.mount: Deactivated successfully. 
Apr 17 23:30:03.067106 containerd[1454]: time="2026-04-17T23:30:03.066958723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:03.067998 containerd[1454]: time="2026-04-17T23:30:03.067894957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 17 23:30:03.069324 containerd[1454]: time="2026-04-17T23:30:03.069227285Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:03.071309 containerd[1454]: time="2026-04-17T23:30:03.071275913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:03.071733 containerd[1454]: time="2026-04-17T23:30:03.071694613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 469.711509ms" Apr 17 23:30:03.071733 containerd[1454]: time="2026-04-17T23:30:03.071735987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 23:30:03.072514 containerd[1454]: time="2026-04-17T23:30:03.072493386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 17 23:30:03.511129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409242544.mount: Deactivated successfully. 
Apr 17 23:30:04.146820 containerd[1454]: time="2026-04-17T23:30:04.146760666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:04.147583 containerd[1454]: time="2026-04-17T23:30:04.147545340Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 17 23:30:04.148676 containerd[1454]: time="2026-04-17T23:30:04.148589451Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:04.152140 containerd[1454]: time="2026-04-17T23:30:04.151940193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:04.153236 containerd[1454]: time="2026-04-17T23:30:04.153183064Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.080597924s" Apr 17 23:30:04.153236 containerd[1454]: time="2026-04-17T23:30:04.153223191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 17 23:30:05.066309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:30:05.075420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:05.184867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
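Each "Pulled image" entry above reports a per-image pull duration in Go's duration format (seconds or milliseconds). Summing them gives the total time containerd spent pulling the seven control-plane images; the values below are copied from the log entries:

```shell
# Convert the mixed ms/s pull durations from the log to seconds and sum them.
durations="1.658469609s 990.808047ms 847.240156ms 1.167221189s 1.274591792s 469.711509ms 1.080597924s"
total=$(printf '%s\n' $durations | awk '
  /ms$/ { sub(/ms$/, ""); sum += $0 / 1000; next }  # milliseconds
  /s$/  { sub(/s$/, "");  sum += $0;        next }  # seconds
  END   { printf "%.3f", sum }')
echo "total pull time: ${total}s"   # ~7.489s across the seven images
```

Note the `ms` pattern must be matched before the plain `s` pattern (the `next` handles that), since every millisecond value also ends in `s`.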
Apr 17 23:30:05.188291 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:30:05.223803 kubelet[2024]: E0417 23:30:05.223686 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:30:05.226183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:30:05.226310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:30:06.311377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:06.323345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:06.345220 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit session-7.scope)... Apr 17 23:30:06.345265 systemd[1]: Reloading... Apr 17 23:30:06.410213 zram_generator::config[2079]: No configuration found. Apr 17 23:30:06.543852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:30:06.605240 systemd[1]: Reloading finished in 259 ms. Apr 17 23:30:06.652433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:06.653910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:06.656174 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:30:06.656345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:06.657645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:30:06.773238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:06.777135 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:30:06.823575 kubelet[2129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:30:06.823575 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:30:06.824011 kubelet[2129]: I0417 23:30:06.823598 2129 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:30:07.090550 kubelet[2129]: I0417 23:30:07.090480 2129 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:30:07.090550 kubelet[2129]: I0417 23:30:07.090543 2129 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:30:07.090720 kubelet[2129]: I0417 23:30:07.090576 2129 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:30:07.090720 kubelet[2129]: I0417 23:30:07.090587 2129 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:30:07.090929 kubelet[2129]: I0417 23:30:07.090879 2129 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:30:07.099018 kubelet[2129]: E0417 23:30:07.098945 2129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:30:07.099203 kubelet[2129]: I0417 23:30:07.099034 2129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:30:07.103497 kubelet[2129]: E0417 23:30:07.103471 2129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:30:07.103497 kubelet[2129]: I0417 23:30:07.103500 2129 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:30:07.107233 kubelet[2129]: I0417 23:30:07.107171 2129 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:30:07.108842 kubelet[2129]: I0417 23:30:07.108777 2129 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:30:07.109124 kubelet[2129]: I0417 23:30:07.108846 2129 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:30:07.109234 kubelet[2129]: I0417 23:30:07.109133 2129 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:30:07.109234 kubelet[2129]: I0417 23:30:07.109147 2129 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:30:07.109271 kubelet[2129]: I0417 23:30:07.109235 2129 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:30:07.111795 kubelet[2129]: I0417 23:30:07.111757 2129 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:30:07.111946 kubelet[2129]: I0417 23:30:07.111915 2129 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:30:07.111946 kubelet[2129]: I0417 23:30:07.111927 2129 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:30:07.111946 kubelet[2129]: I0417 23:30:07.111942 2129 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:30:07.112092 kubelet[2129]: I0417 23:30:07.111950 2129 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:30:07.112796 kubelet[2129]: E0417 23:30:07.112434 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:30:07.112796 kubelet[2129]: E0417 23:30:07.112457 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:30:07.114207 kubelet[2129]: I0417 23:30:07.114152 2129 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:30:07.114941 kubelet[2129]: I0417 23:30:07.114893 2129 kubelet.go:940] "Not starting ClusterTrustBundle informer
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:30:07.114976 kubelet[2129]: I0417 23:30:07.114965 2129 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:30:07.115051 kubelet[2129]: W0417 23:30:07.115022 2129 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:30:07.118247 kubelet[2129]: I0417 23:30:07.118211 2129 server.go:1262] "Started kubelet" Apr 17 23:30:07.118555 kubelet[2129]: I0417 23:30:07.118428 2129 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:30:07.118555 kubelet[2129]: I0417 23:30:07.118502 2129 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:30:07.118810 kubelet[2129]: I0417 23:30:07.118769 2129 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:30:07.118874 kubelet[2129]: I0417 23:30:07.118845 2129 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:30:07.120282 kubelet[2129]: I0417 23:30:07.119235 2129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:30:07.120282 kubelet[2129]: I0417 23:30:07.120024 2129 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:30:07.121427 kubelet[2129]: I0417 23:30:07.121339 2129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:30:07.123391 kubelet[2129]: E0417 23:30:07.123357 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:07.123391 kubelet[2129]: I0417 23:30:07.123406 2129 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:30:07.123581 
kubelet[2129]: I0417 23:30:07.123552 2129 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:30:07.123664 kubelet[2129]: I0417 23:30:07.123652 2129 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:30:07.123972 kubelet[2129]: E0417 23:30:07.123947 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:30:07.124141 kubelet[2129]: E0417 23:30:07.124032 2129 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:30:07.124141 kubelet[2129]: E0417 23:30:07.120898 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a748c939e94b32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:30:07.118166834 +0000 UTC m=+0.337198335,LastTimestamp:2026-04-17 23:30:07.118166834 +0000 UTC m=+0.337198335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:30:07.124428 kubelet[2129]: I0417 23:30:07.124388 2129 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:30:07.124525 kubelet[2129]: E0417 23:30:07.124445 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Apr 17 23:30:07.124576 kubelet[2129]: I0417 23:30:07.124525 2129 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:30:07.125593 kubelet[2129]: I0417 23:30:07.125558 2129 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:30:07.137034 kubelet[2129]: I0417 23:30:07.136990 2129 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:30:07.137034 kubelet[2129]: I0417 23:30:07.137020 2129 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:30:07.137034 kubelet[2129]: I0417 23:30:07.137033 2129 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:30:07.138655 kubelet[2129]: I0417 23:30:07.138580 2129 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:30:07.140284 kubelet[2129]: I0417 23:30:07.140233 2129 policy_none.go:49] "None policy: Start" Apr 17 23:30:07.140284 kubelet[2129]: I0417 23:30:07.140293 2129 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:30:07.140411 kubelet[2129]: I0417 23:30:07.140305 2129 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:30:07.140591 kubelet[2129]: I0417 23:30:07.140547 2129 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:30:07.140591 kubelet[2129]: I0417 23:30:07.140585 2129 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:30:07.140651 kubelet[2129]: I0417 23:30:07.140605 2129 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:30:07.140674 kubelet[2129]: E0417 23:30:07.140660 2129 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:30:07.142606 kubelet[2129]: E0417 23:30:07.142314 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:30:07.143294 kubelet[2129]: I0417 23:30:07.143040 2129 policy_none.go:47] "Start" Apr 17 23:30:07.151773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:30:07.165526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:30:07.168242 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 23:30:07.184009 kubelet[2129]: E0417 23:30:07.183979 2129 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:30:07.184343 kubelet[2129]: I0417 23:30:07.184219 2129 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:30:07.184343 kubelet[2129]: I0417 23:30:07.184229 2129 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:30:07.184441 kubelet[2129]: I0417 23:30:07.184430 2129 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:30:07.185520 kubelet[2129]: E0417 23:30:07.185494 2129 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:30:07.185580 kubelet[2129]: E0417 23:30:07.185538 2129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:30:07.252373 systemd[1]: Created slice kubepods-burstable-pod4602d113227f39ee269a93428c7e048c.slice - libcontainer container kubepods-burstable-pod4602d113227f39ee269a93428c7e048c.slice. Apr 17 23:30:07.263776 kubelet[2129]: E0417 23:30:07.263713 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:07.266577 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 17 23:30:07.281109 kubelet[2129]: E0417 23:30:07.281042 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:07.283541 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 17 23:30:07.284864 kubelet[2129]: E0417 23:30:07.284840 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:07.285503 kubelet[2129]: I0417 23:30:07.285450 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:30:07.285833 kubelet[2129]: E0417 23:30:07.285801 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 17 23:30:07.325664 kubelet[2129]: E0417 23:30:07.325530 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Apr 17 23:30:07.425887 kubelet[2129]: I0417 23:30:07.425561 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:07.425887 kubelet[2129]: I0417 23:30:07.425642 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:07.425887 kubelet[2129]: I0417 23:30:07.425662 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:07.425887 kubelet[2129]: I0417 23:30:07.425686 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:07.425887 kubelet[2129]: I0417 23:30:07.425698 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:07.426248 kubelet[2129]: I0417 23:30:07.425710 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:07.426248 kubelet[2129]: I0417 23:30:07.425723 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:07.426248 kubelet[2129]: I0417 23:30:07.425747 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:07.426248 kubelet[2129]: I0417 23:30:07.425760 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:07.488463 kubelet[2129]: I0417 23:30:07.488052 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:30:07.488463 kubelet[2129]: E0417 23:30:07.488426 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 17 23:30:07.637268 kubelet[2129]: E0417 23:30:07.637174 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:07.638134 containerd[1454]: time="2026-04-17T23:30:07.638050528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4602d113227f39ee269a93428c7e048c,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:07.639751 kubelet[2129]: E0417 23:30:07.639717 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:07.640416 containerd[1454]: time="2026-04-17T23:30:07.640378761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:07.640831 kubelet[2129]: E0417 23:30:07.640800 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:07.641320 containerd[1454]: time="2026-04-17T23:30:07.641205564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:07.726767 kubelet[2129]: E0417 23:30:07.726470 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Apr 17 23:30:07.891574 kubelet[2129]: I0417 23:30:07.891506 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:30:07.892417 kubelet[2129]: E0417 23:30:07.892392 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 17 23:30:08.007910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527983394.mount: Deactivated successfully. 
Apr 17 23:30:08.015863 containerd[1454]: time="2026-04-17T23:30:08.015671512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:08.019011 containerd[1454]: time="2026-04-17T23:30:08.018883766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:30:08.020045 containerd[1454]: time="2026-04-17T23:30:08.019957056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:08.021052 containerd[1454]: time="2026-04-17T23:30:08.020989017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:08.021716 containerd[1454]: time="2026-04-17T23:30:08.021588128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:30:08.022751 containerd[1454]: time="2026-04-17T23:30:08.022719668Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:08.023231 containerd[1454]: time="2026-04-17T23:30:08.023188897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:30:08.025796 containerd[1454]: time="2026-04-17T23:30:08.025667501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:08.027804 
containerd[1454]: time="2026-04-17T23:30:08.027740555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.567609ms" Apr 17 23:30:08.028739 containerd[1454]: time="2026-04-17T23:30:08.028595575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 388.128022ms" Apr 17 23:30:08.031541 containerd[1454]: time="2026-04-17T23:30:08.031490990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.239944ms" Apr 17 23:30:08.111271 kubelet[2129]: E0417 23:30:08.110372 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:30:08.143873 containerd[1454]: time="2026-04-17T23:30:08.143670476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:08.143873 containerd[1454]: time="2026-04-17T23:30:08.143869349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:08.143873 containerd[1454]: time="2026-04-17T23:30:08.143886527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.145056 containerd[1454]: time="2026-04-17T23:30:08.144604041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.153619 containerd[1454]: time="2026-04-17T23:30:08.152409864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:08.153619 containerd[1454]: time="2026-04-17T23:30:08.152445811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:08.153619 containerd[1454]: time="2026-04-17T23:30:08.152454389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.153619 containerd[1454]: time="2026-04-17T23:30:08.152509472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.155972 containerd[1454]: time="2026-04-17T23:30:08.153771353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:08.155972 containerd[1454]: time="2026-04-17T23:30:08.153819946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:08.155972 containerd[1454]: time="2026-04-17T23:30:08.153946494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.155972 containerd[1454]: time="2026-04-17T23:30:08.154123311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:08.165714 systemd[1]: Started cri-containerd-aea68832627a0d8febc7795707ef285e033fdd3a7036f2d9f9e89e32ae882176.scope - libcontainer container aea68832627a0d8febc7795707ef285e033fdd3a7036f2d9f9e89e32ae882176. Apr 17 23:30:08.173300 systemd[1]: Started cri-containerd-342b8ccf3a858d4b124fc5b9a70c92f95f38fdccd604379925eaa0d8fc860a80.scope - libcontainer container 342b8ccf3a858d4b124fc5b9a70c92f95f38fdccd604379925eaa0d8fc860a80. Apr 17 23:30:08.174578 systemd[1]: Started cri-containerd-9d2e0fbd384bdcd153d1659cb3ed25e0a9531abf3bdbaf85dcbeb151ac3ff9ee.scope - libcontainer container 9d2e0fbd384bdcd153d1659cb3ed25e0a9531abf3bdbaf85dcbeb151ac3ff9ee. Apr 17 23:30:08.196446 kubelet[2129]: E0417 23:30:08.196355 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:30:08.220588 containerd[1454]: time="2026-04-17T23:30:08.220511558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aea68832627a0d8febc7795707ef285e033fdd3a7036f2d9f9e89e32ae882176\"" Apr 17 23:30:08.223158 kubelet[2129]: E0417 23:30:08.223027 2129 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:30:08.223214 containerd[1454]: time="2026-04-17T23:30:08.223159270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4602d113227f39ee269a93428c7e048c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d2e0fbd384bdcd153d1659cb3ed25e0a9531abf3bdbaf85dcbeb151ac3ff9ee\"" Apr 17 23:30:08.223493 kubelet[2129]: E0417 23:30:08.223430 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:08.227091 kubelet[2129]: E0417 23:30:08.226821 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:08.231735 containerd[1454]: time="2026-04-17T23:30:08.231691173Z" level=info msg="CreateContainer within sandbox \"aea68832627a0d8febc7795707ef285e033fdd3a7036f2d9f9e89e32ae882176\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:30:08.233978 containerd[1454]: time="2026-04-17T23:30:08.233880122Z" level=info msg="CreateContainer within sandbox \"9d2e0fbd384bdcd153d1659cb3ed25e0a9531abf3bdbaf85dcbeb151ac3ff9ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:30:08.238385 containerd[1454]: time="2026-04-17T23:30:08.238343241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"342b8ccf3a858d4b124fc5b9a70c92f95f38fdccd604379925eaa0d8fc860a80\"" Apr 17 23:30:08.239990 kubelet[2129]: E0417 23:30:08.239948 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:08.245137 
containerd[1454]: time="2026-04-17T23:30:08.244994991Z" level=info msg="CreateContainer within sandbox \"342b8ccf3a858d4b124fc5b9a70c92f95f38fdccd604379925eaa0d8fc860a80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:30:08.254106 containerd[1454]: time="2026-04-17T23:30:08.253985788Z" level=info msg="CreateContainer within sandbox \"9d2e0fbd384bdcd153d1659cb3ed25e0a9531abf3bdbaf85dcbeb151ac3ff9ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa09768d87bf50b543a6d9741554547a3d640fa494e14dd123fa07f20a34a8ae\"" Apr 17 23:30:08.254953 containerd[1454]: time="2026-04-17T23:30:08.254936132Z" level=info msg="StartContainer for \"aa09768d87bf50b543a6d9741554547a3d640fa494e14dd123fa07f20a34a8ae\"" Apr 17 23:30:08.259666 containerd[1454]: time="2026-04-17T23:30:08.259428175Z" level=info msg="CreateContainer within sandbox \"aea68832627a0d8febc7795707ef285e033fdd3a7036f2d9f9e89e32ae882176\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ba7e40779a105c4d1bc49c67baf41217e7add3089657bf2ae7015f496f42804f\"" Apr 17 23:30:08.259984 containerd[1454]: time="2026-04-17T23:30:08.259935658Z" level=info msg="StartContainer for \"ba7e40779a105c4d1bc49c67baf41217e7add3089657bf2ae7015f496f42804f\"" Apr 17 23:30:08.271311 containerd[1454]: time="2026-04-17T23:30:08.270619025Z" level=info msg="CreateContainer within sandbox \"342b8ccf3a858d4b124fc5b9a70c92f95f38fdccd604379925eaa0d8fc860a80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db2105c04615228b1c4f2c2a303fab1efa99439403a3affcf55fe880d9e4776f\"" Apr 17 23:30:08.271942 containerd[1454]: time="2026-04-17T23:30:08.271915974Z" level=info msg="StartContainer for \"db2105c04615228b1c4f2c2a303fab1efa99439403a3affcf55fe880d9e4776f\"" Apr 17 23:30:08.288273 systemd[1]: Started cri-containerd-aa09768d87bf50b543a6d9741554547a3d640fa494e14dd123fa07f20a34a8ae.scope - libcontainer container 
aa09768d87bf50b543a6d9741554547a3d640fa494e14dd123fa07f20a34a8ae. Apr 17 23:30:08.291416 systemd[1]: Started cri-containerd-ba7e40779a105c4d1bc49c67baf41217e7add3089657bf2ae7015f496f42804f.scope - libcontainer container ba7e40779a105c4d1bc49c67baf41217e7add3089657bf2ae7015f496f42804f. Apr 17 23:30:08.307258 systemd[1]: Started cri-containerd-db2105c04615228b1c4f2c2a303fab1efa99439403a3affcf55fe880d9e4776f.scope - libcontainer container db2105c04615228b1c4f2c2a303fab1efa99439403a3affcf55fe880d9e4776f. Apr 17 23:30:08.343688 containerd[1454]: time="2026-04-17T23:30:08.343473084Z" level=info msg="StartContainer for \"aa09768d87bf50b543a6d9741554547a3d640fa494e14dd123fa07f20a34a8ae\" returns successfully" Apr 17 23:30:08.358957 containerd[1454]: time="2026-04-17T23:30:08.358891179Z" level=info msg="StartContainer for \"ba7e40779a105c4d1bc49c67baf41217e7add3089657bf2ae7015f496f42804f\" returns successfully" Apr 17 23:30:08.358957 containerd[1454]: time="2026-04-17T23:30:08.358971291Z" level=info msg="StartContainer for \"db2105c04615228b1c4f2c2a303fab1efa99439403a3affcf55fe880d9e4776f\" returns successfully" Apr 17 23:30:08.697206 kubelet[2129]: I0417 23:30:08.697048 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:30:09.154414 kubelet[2129]: E0417 23:30:09.154358 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:09.154715 kubelet[2129]: E0417 23:30:09.154471 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:09.154715 kubelet[2129]: E0417 23:30:09.154566 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:09.154715 kubelet[2129]: E0417 23:30:09.154611 2129 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:09.155789 kubelet[2129]: E0417 23:30:09.155751 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:09.155880 kubelet[2129]: E0417 23:30:09.155847 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:09.253424 kubelet[2129]: E0417 23:30:09.253330 2129 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 23:30:09.402591 kubelet[2129]: E0417 23:30:09.402453 2129 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a748c939e94b32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:30:07.118166834 +0000 UTC m=+0.337198335,LastTimestamp:2026-04-17 23:30:07.118166834 +0000 UTC m=+0.337198335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:30:09.455890 kubelet[2129]: I0417 23:30:09.455655 2129 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:30:09.455890 kubelet[2129]: E0417 23:30:09.455709 2129 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 23:30:09.468512 kubelet[2129]: E0417 
23:30:09.468439 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:09.569307 kubelet[2129]: E0417 23:30:09.569219 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:09.669859 kubelet[2129]: E0417 23:30:09.669753 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:09.770470 kubelet[2129]: E0417 23:30:09.770181 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:09.871211 kubelet[2129]: E0417 23:30:09.871042 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:09.972284 kubelet[2129]: E0417 23:30:09.971960 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:10.072462 kubelet[2129]: E0417 23:30:10.072396 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:10.158421 kubelet[2129]: E0417 23:30:10.157996 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:10.158421 kubelet[2129]: E0417 23:30:10.158148 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:10.158421 kubelet[2129]: E0417 23:30:10.158180 2129 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:10.158421 kubelet[2129]: E0417 23:30:10.158292 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:10.173327 kubelet[2129]: E0417 23:30:10.173193 2129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:10.224881 kubelet[2129]: I0417 23:30:10.224771 2129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:10.231466 kubelet[2129]: I0417 23:30:10.231418 2129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:10.237720 kubelet[2129]: I0417 23:30:10.237674 2129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:11.114590 kubelet[2129]: I0417 23:30:11.114375 2129 apiserver.go:52] "Watching apiserver" Apr 17 23:30:11.117195 kubelet[2129]: E0417 23:30:11.117142 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:11.123943 kubelet[2129]: I0417 23:30:11.123864 2129 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:30:11.159519 kubelet[2129]: E0417 23:30:11.159490 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:11.159519 kubelet[2129]: E0417 23:30:11.159504 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:11.225972 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... Apr 17 23:30:11.226003 systemd[1]: Reloading... Apr 17 23:30:11.300186 zram_generator::config[2457]: No configuration found. 
Apr 17 23:30:11.385709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:30:11.454131 systemd[1]: Reloading finished in 227 ms. Apr 17 23:30:11.492269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:11.506398 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:30:11.506627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:11.515860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:11.623193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:11.627453 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:30:11.672456 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:30:11.672456 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:30:11.672456 kubelet[2502]: I0417 23:30:11.672327 2502 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:30:11.679058 kubelet[2502]: I0417 23:30:11.678993 2502 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:30:11.679058 kubelet[2502]: I0417 23:30:11.679031 2502 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:30:11.679058 kubelet[2502]: I0417 23:30:11.679056 2502 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:30:11.679289 kubelet[2502]: I0417 23:30:11.679109 2502 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:30:11.679289 kubelet[2502]: I0417 23:30:11.679256 2502 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:30:11.680542 kubelet[2502]: I0417 23:30:11.680486 2502 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:30:11.682681 kubelet[2502]: I0417 23:30:11.682615 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:30:11.686668 kubelet[2502]: E0417 23:30:11.686589 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:30:11.686833 kubelet[2502]: I0417 23:30:11.686691 2502 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:30:11.694744 kubelet[2502]: I0417 23:30:11.694684 2502 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:30:11.695005 kubelet[2502]: I0417 23:30:11.694932 2502 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:30:11.695159 kubelet[2502]: I0417 23:30:11.694967 2502 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:30:11.695159 kubelet[2502]: I0417 23:30:11.695154 2502 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:30:11.695329 
kubelet[2502]: I0417 23:30:11.695164 2502 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:30:11.695329 kubelet[2502]: I0417 23:30:11.695189 2502 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:30:11.695435 kubelet[2502]: I0417 23:30:11.695390 2502 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:30:11.695587 kubelet[2502]: I0417 23:30:11.695535 2502 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:30:11.695587 kubelet[2502]: I0417 23:30:11.695561 2502 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:30:11.695587 kubelet[2502]: I0417 23:30:11.695578 2502 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:30:11.695587 kubelet[2502]: I0417 23:30:11.695586 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:30:11.697155 kubelet[2502]: I0417 23:30:11.697028 2502 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:30:11.697763 kubelet[2502]: I0417 23:30:11.697620 2502 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:30:11.698028 kubelet[2502]: I0417 23:30:11.697802 2502 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:30:11.701331 kubelet[2502]: I0417 23:30:11.701288 2502 server.go:1262] "Started kubelet" Apr 17 23:30:11.701456 kubelet[2502]: I0417 23:30:11.701424 2502 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:30:11.701736 kubelet[2502]: I0417 23:30:11.701690 2502 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:30:11.701759 kubelet[2502]: I0417 23:30:11.701745 2502 server_v1.go:49] 
"podresources" method="list" useActivePods=true Apr 17 23:30:11.702111 kubelet[2502]: I0417 23:30:11.702051 2502 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:30:11.703198 kubelet[2502]: I0417 23:30:11.702594 2502 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.705398 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.705878 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:30:11.711143 kubelet[2502]: E0417 23:30:11.706940 2502 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.706967 2502 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.707139 2502 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.707224 2502 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.710510 2502 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:30:11.711143 kubelet[2502]: I0417 23:30:11.710623 2502 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:30:11.714552 kubelet[2502]: E0417 23:30:11.713863 2502 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:30:11.714552 kubelet[2502]: I0417 23:30:11.713939 2502 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:30:11.730767 kubelet[2502]: I0417 23:30:11.730557 2502 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:30:11.731924 kubelet[2502]: I0417 23:30:11.731883 2502 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:30:11.731924 kubelet[2502]: I0417 23:30:11.731899 2502 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:30:11.731924 kubelet[2502]: I0417 23:30:11.731923 2502 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:30:11.732052 kubelet[2502]: E0417 23:30:11.731965 2502 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:30:11.752742 kubelet[2502]: I0417 23:30:11.752454 2502 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:30:11.752742 kubelet[2502]: I0417 23:30:11.752485 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:30:11.752742 kubelet[2502]: I0417 23:30:11.752635 2502 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:30:11.752742 kubelet[2502]: I0417 23:30:11.752753 2502 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752760 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752772 2502 policy_none.go:49] "None policy: Start" Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752779 2502 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752787 2502 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752849 2502 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:30:11.752922 kubelet[2502]: I0417 23:30:11.752854 2502 policy_none.go:47] "Start" Apr 17 23:30:11.757580 kubelet[2502]: E0417 23:30:11.757542 2502 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:30:11.757761 kubelet[2502]: I0417 23:30:11.757733 2502 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:30:11.757802 kubelet[2502]: I0417 23:30:11.757760 2502 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:30:11.758516 kubelet[2502]: I0417 23:30:11.758337 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:30:11.759685 kubelet[2502]: E0417 23:30:11.759598 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:30:11.834281 kubelet[2502]: I0417 23:30:11.834051 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:11.834281 kubelet[2502]: I0417 23:30:11.834226 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:11.834550 kubelet[2502]: I0417 23:30:11.834474 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:11.843391 kubelet[2502]: E0417 23:30:11.843245 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:11.843391 kubelet[2502]: E0417 23:30:11.843383 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:11.843784 kubelet[2502]: E0417 23:30:11.843743 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:11.864520 kubelet[2502]: I0417 23:30:11.864448 2502 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:30:11.872713 kubelet[2502]: I0417 23:30:11.872684 2502 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 17 23:30:11.873214 kubelet[2502]: I0417 23:30:11.872934 2502 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:30:11.908417 kubelet[2502]: I0417 23:30:11.908339 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:12.009517 kubelet[2502]: I0417 23:30:12.009267 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:12.009517 kubelet[2502]: I0417 23:30:12.009324 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:12.009517 kubelet[2502]: I0417 23:30:12.009342 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:12.009517 kubelet[2502]: I0417 23:30:12.009356 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:12.009517 kubelet[2502]: I0417 23:30:12.009368 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:12.009779 kubelet[2502]: I0417 23:30:12.009381 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:12.009779 kubelet[2502]: I0417 23:30:12.009485 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:12.009779 kubelet[2502]: I0417 23:30:12.009522 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4602d113227f39ee269a93428c7e048c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4602d113227f39ee269a93428c7e048c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:12.144828 kubelet[2502]: E0417 23:30:12.144733 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:12.144828 kubelet[2502]: E0417 23:30:12.144808 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:12.145259 kubelet[2502]: E0417 23:30:12.144877 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 
23:30:12.223478 sudo[2545]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 23:30:12.223755 sudo[2545]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 17 23:30:12.696830 kubelet[2502]: I0417 23:30:12.696754 2502 apiserver.go:52] "Watching apiserver" Apr 17 23:30:12.708114 kubelet[2502]: I0417 23:30:12.707979 2502 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:30:12.743989 kubelet[2502]: I0417 23:30:12.743958 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:12.744754 kubelet[2502]: E0417 23:30:12.743993 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:12.744754 kubelet[2502]: E0417 23:30:12.744236 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:12.754003 kubelet[2502]: E0417 23:30:12.753806 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:12.754003 kubelet[2502]: E0417 23:30:12.753993 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:12.772953 kubelet[2502]: I0417 23:30:12.772791 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.772773175 podStartE2EDuration="2.772773175s" podCreationTimestamp="2026-04-17 23:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-17 23:30:12.771785777 +0000 UTC m=+1.140169398" watchObservedRunningTime="2026-04-17 23:30:12.772773175 +0000 UTC m=+1.141156796" Apr 17 23:30:12.785680 sudo[2545]: pam_unix(sudo:session): session closed for user root Apr 17 23:30:12.801974 kubelet[2502]: I0417 23:30:12.801715 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.801694603 podStartE2EDuration="2.801694603s" podCreationTimestamp="2026-04-17 23:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:12.781853436 +0000 UTC m=+1.150237069" watchObservedRunningTime="2026-04-17 23:30:12.801694603 +0000 UTC m=+1.170078222" Apr 17 23:30:13.746113 kubelet[2502]: E0417 23:30:13.745977 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:13.746458 kubelet[2502]: E0417 23:30:13.746164 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:14.023044 sudo[1630]: pam_unix(sudo:session): session closed for user root Apr 17 23:30:14.025235 sshd[1627]: pam_unix(sshd:session): session closed for user core Apr 17 23:30:14.028118 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:41422.service: Deactivated successfully. Apr 17 23:30:14.029500 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:30:14.030234 systemd[1]: session-7.scope: Consumed 4.315s CPU time, 158.3M memory peak, 0B memory swap peak. Apr 17 23:30:14.030999 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:30:14.031986 systemd-logind[1434]: Removed session 7. 
Apr 17 23:30:15.753228 kubelet[2502]: E0417 23:30:15.753138 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:18.200883 kubelet[2502]: I0417 23:30:18.200831 2502 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:30:18.201444 containerd[1454]: time="2026-04-17T23:30:18.201276214Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:30:18.201583 kubelet[2502]: I0417 23:30:18.201524 2502 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:30:18.956389 kubelet[2502]: I0417 23:30:18.956312 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.956298122 podStartE2EDuration="8.956298122s" podCreationTimestamp="2026-04-17 23:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:12.801898247 +0000 UTC m=+1.170281874" watchObservedRunningTime="2026-04-17 23:30:18.956298122 +0000 UTC m=+7.324681727" Apr 17 23:30:18.970859 systemd[1]: Created slice kubepods-besteffort-pode362b5b4_d1a3_4dad_a904_b00dd4277a3a.slice - libcontainer container kubepods-besteffort-pode362b5b4_d1a3_4dad_a904_b00dd4277a3a.slice. Apr 17 23:30:18.986051 systemd[1]: Created slice kubepods-burstable-pod6a0c505f_4b9c_4262_acf5_27c692151472.slice - libcontainer container kubepods-burstable-pod6a0c505f_4b9c_4262_acf5_27c692151472.slice. 
Apr 17 23:30:19.059253 kubelet[2502]: I0417 23:30:19.059195 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a0c505f-4b9c-4262-acf5-27c692151472-clustermesh-secrets\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059253 kubelet[2502]: I0417 23:30:19.059250 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-config-path\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059253 kubelet[2502]: I0417 23:30:19.059278 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-net\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059253 kubelet[2502]: I0417 23:30:19.059295 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-run\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059324 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-bpf-maps\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059361 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cni-path\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059376 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-hubble-tls\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059391 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c898n\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-kube-api-access-c898n\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059408 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-cgroup\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059546 kubelet[2502]: I0417 23:30:19.059421 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-etc-cni-netd\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059440 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e362b5b4-d1a3-4dad-a904-b00dd4277a3a-kube-proxy\") pod \"kube-proxy-n9nw8\" (UID: \"e362b5b4-d1a3-4dad-a904-b00dd4277a3a\") " pod="kube-system/kube-proxy-n9nw8"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059460 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e362b5b4-d1a3-4dad-a904-b00dd4277a3a-xtables-lock\") pod \"kube-proxy-n9nw8\" (UID: \"e362b5b4-d1a3-4dad-a904-b00dd4277a3a\") " pod="kube-system/kube-proxy-n9nw8"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059475 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-hostproc\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059499 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-lib-modules\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059521 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-xtables-lock\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059665 kubelet[2502]: I0417 23:30:19.059540 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-kernel\") pod \"cilium-h42mh\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") " pod="kube-system/cilium-h42mh"
Apr 17 23:30:19.059815 kubelet[2502]: I0417 23:30:19.059567 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e362b5b4-d1a3-4dad-a904-b00dd4277a3a-lib-modules\") pod \"kube-proxy-n9nw8\" (UID: \"e362b5b4-d1a3-4dad-a904-b00dd4277a3a\") " pod="kube-system/kube-proxy-n9nw8"
Apr 17 23:30:19.059815 kubelet[2502]: I0417 23:30:19.059584 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgcm\" (UniqueName: \"kubernetes.io/projected/e362b5b4-d1a3-4dad-a904-b00dd4277a3a-kube-api-access-rsgcm\") pod \"kube-proxy-n9nw8\" (UID: \"e362b5b4-d1a3-4dad-a904-b00dd4277a3a\") " pod="kube-system/kube-proxy-n9nw8"
Apr 17 23:30:19.288169 kubelet[2502]: E0417 23:30:19.287641 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.289866 containerd[1454]: time="2026-04-17T23:30:19.288658761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9nw8,Uid:e362b5b4-d1a3-4dad-a904-b00dd4277a3a,Namespace:kube-system,Attempt:0,}"
Apr 17 23:30:19.291995 kubelet[2502]: E0417 23:30:19.291930 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.292435 containerd[1454]: time="2026-04-17T23:30:19.292325659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h42mh,Uid:6a0c505f-4b9c-4262-acf5-27c692151472,Namespace:kube-system,Attempt:0,}"
Apr 17 23:30:19.320978 containerd[1454]: time="2026-04-17T23:30:19.320672627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:19.320978 containerd[1454]: time="2026-04-17T23:30:19.320758688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:19.320978 containerd[1454]: time="2026-04-17T23:30:19.320777563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.320978 containerd[1454]: time="2026-04-17T23:30:19.320869363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.322492 containerd[1454]: time="2026-04-17T23:30:19.322330481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:19.323165 containerd[1454]: time="2026-04-17T23:30:19.323041142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:19.323267 containerd[1454]: time="2026-04-17T23:30:19.323149241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.323267 containerd[1454]: time="2026-04-17T23:30:19.323207453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.351324 systemd[1]: Started cri-containerd-084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b.scope - libcontainer container 084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b.
Apr 17 23:30:19.355121 systemd[1]: Started cri-containerd-419243bf5bb7d4fe83b299ffc7bdaa6de4b4103b3283ec1503ea19f7abe3be5e.scope - libcontainer container 419243bf5bb7d4fe83b299ffc7bdaa6de4b4103b3283ec1503ea19f7abe3be5e.
Apr 17 23:30:19.374859 containerd[1454]: time="2026-04-17T23:30:19.374790609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h42mh,Uid:6a0c505f-4b9c-4262-acf5-27c692151472,Namespace:kube-system,Attempt:0,} returns sandbox id \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\""
Apr 17 23:30:19.375492 kubelet[2502]: E0417 23:30:19.375471 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.377340 containerd[1454]: time="2026-04-17T23:30:19.377274527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9nw8,Uid:e362b5b4-d1a3-4dad-a904-b00dd4277a3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"419243bf5bb7d4fe83b299ffc7bdaa6de4b4103b3283ec1503ea19f7abe3be5e\""
Apr 17 23:30:19.378177 kubelet[2502]: E0417 23:30:19.378032 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.379441 containerd[1454]: time="2026-04-17T23:30:19.379274267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 17 23:30:19.384796 containerd[1454]: time="2026-04-17T23:30:19.384760762Z" level=info msg="CreateContainer within sandbox \"419243bf5bb7d4fe83b299ffc7bdaa6de4b4103b3283ec1503ea19f7abe3be5e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:30:19.416727 containerd[1454]: time="2026-04-17T23:30:19.416626424Z" level=info msg="CreateContainer within sandbox \"419243bf5bb7d4fe83b299ffc7bdaa6de4b4103b3283ec1503ea19f7abe3be5e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1893ffdf07f536ce2748398fbfb6112a52c5d26875b778be15b530aa61c0a4f9\""
Apr 17 23:30:19.417854 containerd[1454]: time="2026-04-17T23:30:19.417794135Z" level=info msg="StartContainer for \"1893ffdf07f536ce2748398fbfb6112a52c5d26875b778be15b530aa61c0a4f9\""
Apr 17 23:30:19.451976 systemd[1]: Created slice kubepods-besteffort-podbf10608a_9a4a_48ce_bda3_f59395fc07e4.slice - libcontainer container kubepods-besteffort-podbf10608a_9a4a_48ce_bda3_f59395fc07e4.slice.
Apr 17 23:30:19.464187 kubelet[2502]: I0417 23:30:19.464141 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf10608a-9a4a-48ce-bda3-f59395fc07e4-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-9sg2h\" (UID: \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\") " pod="kube-system/cilium-operator-6f9c7c5859-9sg2h"
Apr 17 23:30:19.464187 kubelet[2502]: I0417 23:30:19.464180 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx76t\" (UniqueName: \"kubernetes.io/projected/bf10608a-9a4a-48ce-bda3-f59395fc07e4-kube-api-access-vx76t\") pod \"cilium-operator-6f9c7c5859-9sg2h\" (UID: \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\") " pod="kube-system/cilium-operator-6f9c7c5859-9sg2h"
Apr 17 23:30:19.475315 systemd[1]: Started cri-containerd-1893ffdf07f536ce2748398fbfb6112a52c5d26875b778be15b530aa61c0a4f9.scope - libcontainer container 1893ffdf07f536ce2748398fbfb6112a52c5d26875b778be15b530aa61c0a4f9.
Apr 17 23:30:19.498776 containerd[1454]: time="2026-04-17T23:30:19.498666105Z" level=info msg="StartContainer for \"1893ffdf07f536ce2748398fbfb6112a52c5d26875b778be15b530aa61c0a4f9\" returns successfully"
Apr 17 23:30:19.717903 kubelet[2502]: E0417 23:30:19.717685 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.739588 kubelet[2502]: E0417 23:30:19.739460 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.757821 kubelet[2502]: E0417 23:30:19.757567 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.758396 containerd[1454]: time="2026-04-17T23:30:19.758365605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9sg2h,Uid:bf10608a-9a4a-48ce-bda3-f59395fc07e4,Namespace:kube-system,Attempt:0,}"
Apr 17 23:30:19.761169 kubelet[2502]: E0417 23:30:19.760891 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.762215 kubelet[2502]: E0417 23:30:19.762036 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.762758 kubelet[2502]: E0417 23:30:19.762516 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:19.792636 kubelet[2502]: I0417 23:30:19.792392 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9nw8" podStartSLOduration=1.7923472 podStartE2EDuration="1.7923472s" podCreationTimestamp="2026-04-17 23:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:19.780595246 +0000 UTC m=+8.148978862" watchObservedRunningTime="2026-04-17 23:30:19.7923472 +0000 UTC m=+8.160730844"
Apr 17 23:30:19.804460 containerd[1454]: time="2026-04-17T23:30:19.799728573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:19.804460 containerd[1454]: time="2026-04-17T23:30:19.804048453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:19.804460 containerd[1454]: time="2026-04-17T23:30:19.804060833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.804460 containerd[1454]: time="2026-04-17T23:30:19.804340073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:19.839651 systemd[1]: Started cri-containerd-e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c.scope - libcontainer container e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c.
Apr 17 23:30:19.884346 containerd[1454]: time="2026-04-17T23:30:19.884274567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9sg2h,Uid:bf10608a-9a4a-48ce-bda3-f59395fc07e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\""
Apr 17 23:30:19.885567 kubelet[2502]: E0417 23:30:19.885229 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:24.572411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611428013.mount: Deactivated successfully.
Apr 17 23:30:25.761686 kubelet[2502]: E0417 23:30:25.760029 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:25.772233 kubelet[2502]: E0417 23:30:25.772153 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:26.583760 containerd[1454]: time="2026-04-17T23:30:26.583421191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:26.583760 containerd[1454]: time="2026-04-17T23:30:26.583739278Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 17 23:30:26.585530 containerd[1454]: time="2026-04-17T23:30:26.584987131Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:26.586306 containerd[1454]: time="2026-04-17T23:30:26.585996560Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.206643143s"
Apr 17 23:30:26.586306 containerd[1454]: time="2026-04-17T23:30:26.586043921Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 17 23:30:26.592906 containerd[1454]: time="2026-04-17T23:30:26.592827705Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 17 23:30:26.601511 containerd[1454]: time="2026-04-17T23:30:26.601479304Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:30:26.620427 containerd[1454]: time="2026-04-17T23:30:26.620348139Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\""
Apr 17 23:30:26.622290 containerd[1454]: time="2026-04-17T23:30:26.621280368Z" level=info msg="StartContainer for \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\""
Apr 17 23:30:26.657294 systemd[1]: Started cri-containerd-c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f.scope - libcontainer container c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f.
Apr 17 23:30:26.703309 containerd[1454]: time="2026-04-17T23:30:26.703227947Z" level=info msg="StartContainer for \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\" returns successfully"
Apr 17 23:30:26.716527 systemd[1]: cri-containerd-c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f.scope: Deactivated successfully.
Apr 17 23:30:26.775876 kubelet[2502]: E0417 23:30:26.775823 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:26.852164 containerd[1454]: time="2026-04-17T23:30:26.849146757Z" level=info msg="shim disconnected" id=c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f namespace=k8s.io
Apr 17 23:30:26.852164 containerd[1454]: time="2026-04-17T23:30:26.851947790Z" level=warning msg="cleaning up after shim disconnected" id=c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f namespace=k8s.io
Apr 17 23:30:26.852164 containerd[1454]: time="2026-04-17T23:30:26.851963103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:27.613552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f-rootfs.mount: Deactivated successfully.
Apr 17 23:30:27.780845 kubelet[2502]: E0417 23:30:27.780785 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:27.795373 containerd[1454]: time="2026-04-17T23:30:27.795281676Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:30:27.822012 containerd[1454]: time="2026-04-17T23:30:27.821924452Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\""
Apr 17 23:30:27.822901 containerd[1454]: time="2026-04-17T23:30:27.822762534Z" level=info msg="StartContainer for \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\""
Apr 17 23:30:27.869597 systemd[1]: Started cri-containerd-e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d.scope - libcontainer container e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d.
Apr 17 23:30:27.904869 containerd[1454]: time="2026-04-17T23:30:27.904828248Z" level=info msg="StartContainer for \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\" returns successfully"
Apr 17 23:30:27.925919 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:30:27.926145 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:30:27.926205 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:30:27.937164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:30:27.938822 systemd[1]: cri-containerd-e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d.scope: Deactivated successfully.
Apr 17 23:30:27.974039 containerd[1454]: time="2026-04-17T23:30:27.973951114Z" level=info msg="shim disconnected" id=e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d namespace=k8s.io
Apr 17 23:30:27.974039 containerd[1454]: time="2026-04-17T23:30:27.974034917Z" level=warning msg="cleaning up after shim disconnected" id=e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d namespace=k8s.io
Apr 17 23:30:27.974039 containerd[1454]: time="2026-04-17T23:30:27.974046680Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:27.981797 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:30:28.493385 containerd[1454]: time="2026-04-17T23:30:28.493044464Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:28.495040 containerd[1454]: time="2026-04-17T23:30:28.494983328Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 17 23:30:28.496943 containerd[1454]: time="2026-04-17T23:30:28.496845849Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:28.499902 containerd[1454]: time="2026-04-17T23:30:28.499650560Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.906680627s"
Apr 17 23:30:28.499902 containerd[1454]: time="2026-04-17T23:30:28.499734343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 17 23:30:28.504658 containerd[1454]: time="2026-04-17T23:30:28.504599958Z" level=info msg="CreateContainer within sandbox \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 17 23:30:28.521443 containerd[1454]: time="2026-04-17T23:30:28.521211623Z" level=info msg="CreateContainer within sandbox \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\""
Apr 17 23:30:28.523005 containerd[1454]: time="2026-04-17T23:30:28.522909932Z" level=info msg="StartContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\""
Apr 17 23:30:28.568485 systemd[1]: Started cri-containerd-10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e.scope - libcontainer container 10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e.
Apr 17 23:30:28.616237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d-rootfs.mount: Deactivated successfully.
Apr 17 23:30:28.618980 containerd[1454]: time="2026-04-17T23:30:28.618900973Z" level=info msg="StartContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" returns successfully"
Apr 17 23:30:28.797235 kubelet[2502]: E0417 23:30:28.794858 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:28.797235 kubelet[2502]: E0417 23:30:28.796981 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:28.817894 containerd[1454]: time="2026-04-17T23:30:28.817730881Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:30:28.848050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890680150.mount: Deactivated successfully.
Apr 17 23:30:28.855643 containerd[1454]: time="2026-04-17T23:30:28.855524893Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\""
Apr 17 23:30:28.856622 containerd[1454]: time="2026-04-17T23:30:28.856563079Z" level=info msg="StartContainer for \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\""
Apr 17 23:30:28.881572 kubelet[2502]: I0417 23:30:28.881459 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-9sg2h" podStartSLOduration=1.260014623 podStartE2EDuration="9.874310675s" podCreationTimestamp="2026-04-17 23:30:19 +0000 UTC" firstStartedPulling="2026-04-17 23:30:19.886650099 +0000 UTC m=+8.255033705" lastFinishedPulling="2026-04-17 23:30:28.500946152 +0000 UTC m=+16.869329757" observedRunningTime="2026-04-17 23:30:28.873944339 +0000 UTC m=+17.242327960" watchObservedRunningTime="2026-04-17 23:30:28.874310675 +0000 UTC m=+17.242694289"
Apr 17 23:30:28.917544 systemd[1]: Started cri-containerd-512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e.scope - libcontainer container 512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e.
Apr 17 23:30:28.970700 containerd[1454]: time="2026-04-17T23:30:28.970514677Z" level=info msg="StartContainer for \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\" returns successfully"
Apr 17 23:30:28.975572 systemd[1]: cri-containerd-512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e.scope: Deactivated successfully.
Apr 17 23:30:29.020416 containerd[1454]: time="2026-04-17T23:30:29.020351628Z" level=info msg="shim disconnected" id=512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e namespace=k8s.io
Apr 17 23:30:29.020771 containerd[1454]: time="2026-04-17T23:30:29.020659146Z" level=warning msg="cleaning up after shim disconnected" id=512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e namespace=k8s.io
Apr 17 23:30:29.020771 containerd[1454]: time="2026-04-17T23:30:29.020717793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:29.615270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e-rootfs.mount: Deactivated successfully.
Apr 17 23:30:29.827254 kubelet[2502]: E0417 23:30:29.826385 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:29.828948 kubelet[2502]: E0417 23:30:29.827467 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:29.867028 containerd[1454]: time="2026-04-17T23:30:29.866800862Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:30:29.886653 containerd[1454]: time="2026-04-17T23:30:29.886591340Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\""
Apr 17 23:30:29.888420 containerd[1454]: time="2026-04-17T23:30:29.887455934Z" level=info msg="StartContainer for \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\""
Apr 17 23:30:29.939867 systemd[1]: Started cri-containerd-0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e.scope - libcontainer container 0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e.
Apr 17 23:30:29.980493 systemd[1]: cri-containerd-0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e.scope: Deactivated successfully.
Apr 17 23:30:29.983271 containerd[1454]: time="2026-04-17T23:30:29.983169158Z" level=info msg="StartContainer for \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\" returns successfully"
Apr 17 23:30:30.014420 containerd[1454]: time="2026-04-17T23:30:30.014301961Z" level=info msg="shim disconnected" id=0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e namespace=k8s.io
Apr 17 23:30:30.014420 containerd[1454]: time="2026-04-17T23:30:30.014362895Z" level=warning msg="cleaning up after shim disconnected" id=0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e namespace=k8s.io
Apr 17 23:30:30.014420 containerd[1454]: time="2026-04-17T23:30:30.014370228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:30.615983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e-rootfs.mount: Deactivated successfully.
Apr 17 23:30:30.823569 kubelet[2502]: E0417 23:30:30.820844 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:30.839129 containerd[1454]: time="2026-04-17T23:30:30.838631579Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:30:30.871578 containerd[1454]: time="2026-04-17T23:30:30.871163799Z" level=info msg="CreateContainer within sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\""
Apr 17 23:30:30.871950 containerd[1454]: time="2026-04-17T23:30:30.871781739Z" level=info msg="StartContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\""
Apr 17 23:30:30.973463 systemd[1]: Started cri-containerd-122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f.scope - libcontainer container 122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f.
Apr 17 23:30:31.040121 containerd[1454]: time="2026-04-17T23:30:31.040041266Z" level=info msg="StartContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" returns successfully"
Apr 17 23:30:31.247748 kubelet[2502]: I0417 23:30:31.241779 2502 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 17 23:30:31.320627 systemd[1]: Created slice kubepods-burstable-podeba0699b_85ad_4eff_ab29_08fc63fa0303.slice - libcontainer container kubepods-burstable-podeba0699b_85ad_4eff_ab29_08fc63fa0303.slice.
Apr 17 23:30:31.329733 systemd[1]: Created slice kubepods-burstable-podc859462d_9563_4376_98d6_2d07418ca2cc.slice - libcontainer container kubepods-burstable-podc859462d_9563_4376_98d6_2d07418ca2cc.slice.
Apr 17 23:30:31.372878 kubelet[2502]: I0417 23:30:31.372779 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmj4h\" (UniqueName: \"kubernetes.io/projected/eba0699b-85ad-4eff-ab29-08fc63fa0303-kube-api-access-pmj4h\") pod \"coredns-66bc5c9577-hf67j\" (UID: \"eba0699b-85ad-4eff-ab29-08fc63fa0303\") " pod="kube-system/coredns-66bc5c9577-hf67j"
Apr 17 23:30:31.372878 kubelet[2502]: I0417 23:30:31.372841 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rq7\" (UniqueName: \"kubernetes.io/projected/c859462d-9563-4376-98d6-2d07418ca2cc-kube-api-access-d2rq7\") pod \"coredns-66bc5c9577-d7j56\" (UID: \"c859462d-9563-4376-98d6-2d07418ca2cc\") " pod="kube-system/coredns-66bc5c9577-d7j56"
Apr 17 23:30:31.372878 kubelet[2502]: I0417 23:30:31.372873 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c859462d-9563-4376-98d6-2d07418ca2cc-config-volume\") pod \"coredns-66bc5c9577-d7j56\" (UID: \"c859462d-9563-4376-98d6-2d07418ca2cc\") " pod="kube-system/coredns-66bc5c9577-d7j56"
Apr 17 23:30:31.372878 kubelet[2502]: I0417 23:30:31.372887 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eba0699b-85ad-4eff-ab29-08fc63fa0303-config-volume\") pod \"coredns-66bc5c9577-hf67j\" (UID: \"eba0699b-85ad-4eff-ab29-08fc63fa0303\") " pod="kube-system/coredns-66bc5c9577-hf67j"
Apr 17 23:30:31.628988 kubelet[2502]: E0417 23:30:31.628906 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:31.646750 kubelet[2502]: E0417 23:30:31.646660 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:31.657501 containerd[1454]: time="2026-04-17T23:30:31.657394755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hf67j,Uid:eba0699b-85ad-4eff-ab29-08fc63fa0303,Namespace:kube-system,Attempt:0,}"
Apr 17 23:30:31.658544 containerd[1454]: time="2026-04-17T23:30:31.658492330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7j56,Uid:c859462d-9563-4376-98d6-2d07418ca2cc,Namespace:kube-system,Attempt:0,}"
Apr 17 23:30:31.850363 kubelet[2502]: E0417 23:30:31.850319 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:31.875957 kubelet[2502]: I0417 23:30:31.875629 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h42mh" podStartSLOduration=6.6622788889999995 podStartE2EDuration="13.875613331s" podCreationTimestamp="2026-04-17 23:30:18 +0000 UTC" firstStartedPulling="2026-04-17 23:30:19.378012537 +0000 UTC m=+7.746396150" lastFinishedPulling="2026-04-17 23:30:26.591346982 +0000 UTC m=+14.959730592" observedRunningTime="2026-04-17 23:30:31.875327857 +0000 UTC m=+20.243711475" watchObservedRunningTime="2026-04-17 23:30:31.875613331 +0000 UTC m=+20.243996947"
Apr 17 23:30:32.853823 kubelet[2502]: E0417 23:30:32.853731 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:33.058951 systemd-networkd[1378]: cilium_host: Link UP
Apr 17 23:30:33.059054 systemd-networkd[1378]: cilium_net: Link UP
Apr 17 23:30:33.059211 systemd-networkd[1378]: cilium_net: Gained carrier
Apr 17 23:30:33.059352 systemd-networkd[1378]: cilium_host: Gained carrier
Apr 17 23:30:33.162455 systemd-networkd[1378]: cilium_vxlan: Link UP
Apr 17 23:30:33.162460 systemd-networkd[1378]: cilium_vxlan: Gained carrier
Apr 17 23:30:33.295570 systemd-networkd[1378]: cilium_net: Gained IPv6LL
Apr 17 23:30:33.355124 kernel: NET: Registered PF_ALG protocol family
Apr 17 23:30:33.855824 kubelet[2502]: E0417 23:30:33.855451 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:33.976287 systemd-networkd[1378]: cilium_host: Gained IPv6LL
Apr 17 23:30:34.038278 systemd-networkd[1378]: lxc_health: Link UP
Apr 17 23:30:34.051369 systemd-networkd[1378]: lxc_health: Gained carrier
Apr 17 23:30:34.261842 systemd-networkd[1378]: lxc95a7edfcb9e3: Link UP
Apr 17 23:30:34.270818 systemd-networkd[1378]: lxce75845cdcb4e: Link UP
Apr 17 23:30:34.279314 kernel: eth0: renamed from tmp6fbe7
Apr 17 23:30:34.285152 kernel: eth0: renamed from tmp1184b
Apr 17 23:30:34.291917 systemd-networkd[1378]: lxc95a7edfcb9e3: Gained carrier
Apr 17 23:30:34.292143 systemd-networkd[1378]: lxce75845cdcb4e: Gained carrier
Apr 17 23:30:35.064575 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL
Apr 17 23:30:35.293119 kubelet[2502]: E0417 23:30:35.293000 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:35.640384 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Apr 17 23:30:35.831498 systemd-networkd[1378]: lxc95a7edfcb9e3: Gained IPv6LL
Apr 17 23:30:36.023441 systemd-networkd[1378]: lxce75845cdcb4e: Gained IPv6LL
Apr 17 23:30:36.419336 kernel: hrtimer: interrupt took 3762848 ns
Apr 17 23:30:37.368426 update_engine[1446]: I20260417 23:30:37.368196 1446 update_attempter.cc:509] Updating boot flags...
Apr 17 23:30:37.418157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3734)
Apr 17 23:30:37.454190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3733)
Apr 17 23:30:37.891767 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:37788.service - OpenSSH per-connection server daemon (10.0.0.1:37788).
Apr 17 23:30:37.935431 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 37788 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:37.938605 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:37.944208 systemd-logind[1434]: New session 8 of user core.
Apr 17 23:30:37.952354 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:30:38.145243 sshd[3742]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:38.156805 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:30:38.157436 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:37788.service: Deactivated successfully.
Apr 17 23:30:38.159999 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:30:38.161707 systemd-logind[1434]: Removed session 8.
Apr 17 23:30:38.416552 containerd[1454]: time="2026-04-17T23:30:38.416006541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:38.418620 containerd[1454]: time="2026-04-17T23:30:38.418122252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:38.418620 containerd[1454]: time="2026-04-17T23:30:38.418174284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:38.419131 containerd[1454]: time="2026-04-17T23:30:38.418886647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:38.434517 containerd[1454]: time="2026-04-17T23:30:38.434028966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:38.434517 containerd[1454]: time="2026-04-17T23:30:38.434202709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:38.434517 containerd[1454]: time="2026-04-17T23:30:38.434221485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:38.435331 containerd[1454]: time="2026-04-17T23:30:38.434943353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:38.456556 systemd[1]: run-containerd-runc-k8s.io-6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95-runc.5cslL4.mount: Deactivated successfully.
Apr 17 23:30:38.483974 systemd[1]: Started cri-containerd-1184bf51566e9e0a082bc18f404ab0bde92178295100b7a8a5e917b3eb275eb9.scope - libcontainer container 1184bf51566e9e0a082bc18f404ab0bde92178295100b7a8a5e917b3eb275eb9.
Apr 17 23:30:38.486057 systemd[1]: Started cri-containerd-6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95.scope - libcontainer container 6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95.
Apr 17 23:30:38.502848 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:30:38.508709 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:30:38.542997 containerd[1454]: time="2026-04-17T23:30:38.542924160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7j56,Uid:c859462d-9563-4376-98d6-2d07418ca2cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1184bf51566e9e0a082bc18f404ab0bde92178295100b7a8a5e917b3eb275eb9\""
Apr 17 23:30:38.545157 kubelet[2502]: E0417 23:30:38.544200 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:38.548758 containerd[1454]: time="2026-04-17T23:30:38.548600889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hf67j,Uid:eba0699b-85ad-4eff-ab29-08fc63fa0303,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95\""
Apr 17 23:30:38.550003 kubelet[2502]: E0417 23:30:38.549970 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:38.556278 containerd[1454]: time="2026-04-17T23:30:38.556109198Z" level=info msg="CreateContainer within sandbox \"1184bf51566e9e0a082bc18f404ab0bde92178295100b7a8a5e917b3eb275eb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:30:38.572949 containerd[1454]: time="2026-04-17T23:30:38.572903349Z" level=info msg="CreateContainer within sandbox \"6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:30:38.579446 containerd[1454]: time="2026-04-17T23:30:38.579346224Z" level=info msg="CreateContainer within sandbox \"1184bf51566e9e0a082bc18f404ab0bde92178295100b7a8a5e917b3eb275eb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d233a5fa5cfa27472a4b776db6e06ead09f21e9138e09dfa877c8c303c95664d\""
Apr 17 23:30:38.580360 containerd[1454]: time="2026-04-17T23:30:38.580223846Z" level=info msg="StartContainer for \"d233a5fa5cfa27472a4b776db6e06ead09f21e9138e09dfa877c8c303c95664d\""
Apr 17 23:30:38.603590 containerd[1454]: time="2026-04-17T23:30:38.603454345Z" level=info msg="CreateContainer within sandbox \"6fbe7fb5b1fbf6deaafed843c79c355d77fa91f75f9754d07accbb3019e87b95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eafb51b0d088ce88007f959f8d18126993533968a8ae40bfb9f386f1b327b4d9\""
Apr 17 23:30:38.606947 containerd[1454]: time="2026-04-17T23:30:38.606186836Z" level=info msg="StartContainer for \"eafb51b0d088ce88007f959f8d18126993533968a8ae40bfb9f386f1b327b4d9\""
Apr 17 23:30:38.610302 systemd[1]: Started cri-containerd-d233a5fa5cfa27472a4b776db6e06ead09f21e9138e09dfa877c8c303c95664d.scope - libcontainer container d233a5fa5cfa27472a4b776db6e06ead09f21e9138e09dfa877c8c303c95664d.
Apr 17 23:30:38.638365 systemd[1]: Started cri-containerd-eafb51b0d088ce88007f959f8d18126993533968a8ae40bfb9f386f1b327b4d9.scope - libcontainer container eafb51b0d088ce88007f959f8d18126993533968a8ae40bfb9f386f1b327b4d9.
Apr 17 23:30:38.643436 containerd[1454]: time="2026-04-17T23:30:38.643162854Z" level=info msg="StartContainer for \"d233a5fa5cfa27472a4b776db6e06ead09f21e9138e09dfa877c8c303c95664d\" returns successfully"
Apr 17 23:30:38.687256 containerd[1454]: time="2026-04-17T23:30:38.686686015Z" level=info msg="StartContainer for \"eafb51b0d088ce88007f959f8d18126993533968a8ae40bfb9f386f1b327b4d9\" returns successfully"
Apr 17 23:30:38.883395 kubelet[2502]: E0417 23:30:38.882749 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:38.942758 kubelet[2502]: E0417 23:30:38.941649 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:38.968149 kubelet[2502]: I0417 23:30:38.967486 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hf67j" podStartSLOduration=19.967463575 podStartE2EDuration="19.967463575s" podCreationTimestamp="2026-04-17 23:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:38.952417926 +0000 UTC m=+27.320801548" watchObservedRunningTime="2026-04-17 23:30:38.967463575 +0000 UTC m=+27.335847191"
Apr 17 23:30:39.944554 kubelet[2502]: E0417 23:30:39.944508 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:39.944554 kubelet[2502]: E0417 23:30:39.944593 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:40.953798 kubelet[2502]: E0417 23:30:40.953729 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:40.953798 kubelet[2502]: E0417 23:30:40.953782 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:43.158881 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:55542.service - OpenSSH per-connection server daemon (10.0.0.1:55542).
Apr 17 23:30:43.200890 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 55542 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:43.203027 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:43.208333 systemd-logind[1434]: New session 9 of user core.
Apr 17 23:30:43.214444 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:30:43.374185 sshd[3932]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:43.382516 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:55542.service: Deactivated successfully.
Apr 17 23:30:43.392011 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:30:43.393407 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:30:43.395780 systemd-logind[1434]: Removed session 9.
Apr 17 23:30:43.766767 kubelet[2502]: I0417 23:30:43.766712 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:30:43.767639 kubelet[2502]: E0417 23:30:43.767029 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:43.783684 kubelet[2502]: I0417 23:30:43.783544 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d7j56" podStartSLOduration=24.783531024 podStartE2EDuration="24.783531024s" podCreationTimestamp="2026-04-17 23:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:38.991782366 +0000 UTC m=+27.360165982" watchObservedRunningTime="2026-04-17 23:30:43.783531024 +0000 UTC m=+32.151914640"
Apr 17 23:30:43.962289 kubelet[2502]: E0417 23:30:43.962210 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:48.388351 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546).
Apr 17 23:30:48.427050 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:48.428480 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:48.432883 systemd-logind[1434]: New session 10 of user core.
Apr 17 23:30:48.442294 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:30:48.566627 sshd[3947]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:48.574883 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:55546.service: Deactivated successfully.
Apr 17 23:30:48.578654 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:30:48.579879 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:30:48.581015 systemd-logind[1434]: Removed session 10.
Apr 17 23:30:53.581281 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:58484.service - OpenSSH per-connection server daemon (10.0.0.1:58484).
Apr 17 23:30:53.622499 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 58484 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:53.626477 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:53.635261 systemd-logind[1434]: New session 11 of user core.
Apr 17 23:30:53.642681 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:30:53.786543 sshd[3965]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:53.797843 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:58484.service: Deactivated successfully.
Apr 17 23:30:53.800335 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:30:53.801745 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:30:53.808645 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:58496.service - OpenSSH per-connection server daemon (10.0.0.1:58496).
Apr 17 23:30:53.809451 systemd-logind[1434]: Removed session 11.
Apr 17 23:30:53.842961 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 58496 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:53.845774 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:53.851356 systemd-logind[1434]: New session 12 of user core.
Apr 17 23:30:53.862668 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:30:54.042896 sshd[3980]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:54.052359 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:58496.service: Deactivated successfully.
Apr 17 23:30:54.056638 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:30:54.063973 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:30:54.077487 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:58502.service - OpenSSH per-connection server daemon (10.0.0.1:58502).
Apr 17 23:30:54.079350 systemd-logind[1434]: Removed session 12.
Apr 17 23:30:54.118189 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 58502 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:54.119602 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:54.124577 systemd-logind[1434]: New session 13 of user core.
Apr 17 23:30:54.132580 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:30:54.288462 sshd[3992]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:54.292058 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:30:54.292835 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:58502.service: Deactivated successfully.
Apr 17 23:30:54.298400 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:30:54.300427 systemd-logind[1434]: Removed session 13.
Apr 17 23:30:59.328228 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560).
Apr 17 23:30:59.373608 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:59.375527 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:59.384632 systemd-logind[1434]: New session 14 of user core.
Apr 17 23:30:59.391691 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:30:59.571774 sshd[4008]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:59.580471 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:58560.service: Deactivated successfully.
Apr 17 23:30:59.584996 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:30:59.586902 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:30:59.588462 systemd-logind[1434]: Removed session 14.
Apr 17 23:31:04.586815 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:57472.service - OpenSSH per-connection server daemon (10.0.0.1:57472).
Apr 17 23:31:04.623137 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 57472 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:04.624800 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:04.629536 systemd-logind[1434]: New session 15 of user core.
Apr 17 23:31:04.641956 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:31:04.765033 sshd[4022]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:04.772488 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:57472.service: Deactivated successfully.
Apr 17 23:31:04.774006 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:31:04.775303 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:31:04.776337 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:57476.service - OpenSSH per-connection server daemon (10.0.0.1:57476).
Apr 17 23:31:04.777491 systemd-logind[1434]: Removed session 15.
Apr 17 23:31:04.815027 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 57476 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:04.816334 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:04.820210 systemd-logind[1434]: New session 16 of user core.
Apr 17 23:31:04.830341 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:31:05.035826 sshd[4036]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:05.042682 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:57476.service: Deactivated successfully.
Apr 17 23:31:05.044156 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:31:05.045822 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:31:05.053395 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:57480.service - OpenSSH per-connection server daemon (10.0.0.1:57480).
Apr 17 23:31:05.054247 systemd-logind[1434]: Removed session 16.
Apr 17 23:31:05.086985 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 57480 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:05.088390 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:05.092944 systemd-logind[1434]: New session 17 of user core.
Apr 17 23:31:05.111590 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:31:05.595625 sshd[4049]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:05.604762 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:57480.service: Deactivated successfully.
Apr 17 23:31:05.607573 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:31:05.609304 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:31:05.618462 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:57486.service - OpenSSH per-connection server daemon (10.0.0.1:57486).
Apr 17 23:31:05.619815 systemd-logind[1434]: Removed session 17.
Apr 17 23:31:05.657580 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 57486 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:05.659247 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:05.666872 systemd-logind[1434]: New session 18 of user core.
Apr 17 23:31:05.676616 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:31:05.939222 sshd[4067]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:05.951289 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:57486.service: Deactivated successfully.
Apr 17 23:31:05.953204 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:31:05.955370 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:31:05.963199 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:57488.service - OpenSSH per-connection server daemon (10.0.0.1:57488).
Apr 17 23:31:05.964215 systemd-logind[1434]: Removed session 18.
Apr 17 23:31:05.996768 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 57488 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:05.999881 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:06.006679 systemd-logind[1434]: New session 19 of user core.
Apr 17 23:31:06.013453 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:31:06.141059 sshd[4079]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:06.144340 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:57488.service: Deactivated successfully.
Apr 17 23:31:06.145841 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:31:06.147992 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:31:06.149788 systemd-logind[1434]: Removed session 19.
Apr 17 23:31:11.169491 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:36318.service - OpenSSH per-connection server daemon (10.0.0.1:36318).
Apr 17 23:31:11.209688 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 36318 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:11.211628 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:11.220475 systemd-logind[1434]: New session 20 of user core.
Apr 17 23:31:11.228453 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:31:11.370663 sshd[4097]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:11.376057 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:31:11.376732 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:36318.service: Deactivated successfully.
Apr 17 23:31:11.378761 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:31:11.380857 systemd-logind[1434]: Removed session 20.
Apr 17 23:31:16.397985 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:36328.service - OpenSSH per-connection server daemon (10.0.0.1:36328).
Apr 17 23:31:16.438041 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 36328 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:16.441136 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:16.447530 systemd-logind[1434]: New session 21 of user core.
Apr 17 23:31:16.457637 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:31:16.600886 sshd[4113]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:16.604726 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:36328.service: Deactivated successfully.
Apr 17 23:31:16.606843 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:31:16.607624 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:31:16.608925 systemd-logind[1434]: Removed session 21.
Apr 17 23:31:21.636533 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:37228.service - OpenSSH per-connection server daemon (10.0.0.1:37228).
Apr 17 23:31:21.682640 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 37228 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:21.684526 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:21.692748 systemd-logind[1434]: New session 22 of user core.
Apr 17 23:31:21.703497 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:31:21.985483 sshd[4129]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:22.002532 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:37228.service: Deactivated successfully.
Apr 17 23:31:22.004955 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:31:22.006800 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:31:22.013394 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:37242.service - OpenSSH per-connection server daemon (10.0.0.1:37242).
Apr 17 23:31:22.018713 systemd-logind[1434]: Removed session 22.
Apr 17 23:31:22.092460 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 37242 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:22.097814 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:22.107606 systemd-logind[1434]: New session 23 of user core.
Apr 17 23:31:22.115540 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 17 23:31:23.556430 containerd[1454]: time="2026-04-17T23:31:23.556373986Z" level=info msg="StopContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" with timeout 30 (s)"
Apr 17 23:31:23.558137 containerd[1454]: time="2026-04-17T23:31:23.558044393Z" level=info msg="Stop container \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" with signal terminated"
Apr 17 23:31:23.582971 systemd[1]: run-containerd-runc-k8s.io-122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f-runc.y7Xeuw.mount: Deactivated successfully.
Apr 17 23:31:23.596446 systemd[1]: cri-containerd-10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e.scope: Deactivated successfully.
Apr 17 23:31:23.618623 containerd[1454]: time="2026-04-17T23:31:23.618453783Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:31:23.643028 containerd[1454]: time="2026-04-17T23:31:23.642957762Z" level=info msg="StopContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" with timeout 2 (s)"
Apr 17 23:31:23.645400 containerd[1454]: time="2026-04-17T23:31:23.645248610Z" level=info msg="Stop container \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" with signal terminated"
Apr 17 23:31:23.663483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e-rootfs.mount: Deactivated successfully.
Apr 17 23:31:23.672039 systemd-networkd[1378]: lxc_health: Link DOWN
Apr 17 23:31:23.672499 systemd-networkd[1378]: lxc_health: Lost carrier
Apr 17 23:31:23.693389 containerd[1454]: time="2026-04-17T23:31:23.692893230Z" level=info msg="shim disconnected" id=10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e namespace=k8s.io
Apr 17 23:31:23.693389 containerd[1454]: time="2026-04-17T23:31:23.693003949Z" level=warning msg="cleaning up after shim disconnected" id=10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e namespace=k8s.io
Apr 17 23:31:23.693389 containerd[1454]: time="2026-04-17T23:31:23.693015332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:31:23.707384 systemd[1]: cri-containerd-122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f.scope: Deactivated successfully.
Apr 17 23:31:23.707801 systemd[1]: cri-containerd-122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f.scope: Consumed 7.367s CPU time.
Apr 17 23:31:23.840814 containerd[1454]: time="2026-04-17T23:31:23.839580123Z" level=info msg="StopContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" returns successfully"
Apr 17 23:31:23.843764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f-rootfs.mount: Deactivated successfully.
Apr 17 23:31:23.848241 containerd[1454]: time="2026-04-17T23:31:23.844045211Z" level=info msg="StopPodSandbox for \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\""
Apr 17 23:31:23.848241 containerd[1454]: time="2026-04-17T23:31:23.844218730Z" level=info msg="Container to stop \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.853380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c-shm.mount: Deactivated successfully.
Apr 17 23:31:23.863421 containerd[1454]: time="2026-04-17T23:31:23.862743373Z" level=info msg="shim disconnected" id=122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f namespace=k8s.io
Apr 17 23:31:23.863421 containerd[1454]: time="2026-04-17T23:31:23.862797118Z" level=warning msg="cleaning up after shim disconnected" id=122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f namespace=k8s.io
Apr 17 23:31:23.863421 containerd[1454]: time="2026-04-17T23:31:23.862803903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:31:23.872900 systemd[1]: cri-containerd-e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c.scope: Deactivated successfully.
Apr 17 23:31:23.909998 containerd[1454]: time="2026-04-17T23:31:23.909670255Z" level=info msg="StopContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" returns successfully"
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.910951136Z" level=info msg="StopPodSandbox for \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\""
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.911012394Z" level=info msg="Container to stop \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.911021807Z" level=info msg="Container to stop \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.911031184Z" level=info msg="Container to stop \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.911039572Z" level=info msg="Container to stop \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.911361 containerd[1454]: time="2026-04-17T23:31:23.911046766Z" level=info msg="Container to stop \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:31:23.919385 containerd[1454]: time="2026-04-17T23:31:23.919230038Z" level=info msg="shim disconnected" id=e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c namespace=k8s.io
Apr 17 23:31:23.919693 containerd[1454]: time="2026-04-17T23:31:23.919428775Z" level=warning msg="cleaning up after shim disconnected" id=e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c namespace=k8s.io
Apr 17 23:31:23.919693 containerd[1454]: time="2026-04-17T23:31:23.919445893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:31:23.925450 systemd[1]: cri-containerd-084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b.scope: Deactivated successfully.
Apr 17 23:31:23.952967 containerd[1454]: time="2026-04-17T23:31:23.952585064Z" level=info msg="TearDown network for sandbox \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\" successfully"
Apr 17 23:31:23.952967 containerd[1454]: time="2026-04-17T23:31:23.952617663Z" level=info msg="StopPodSandbox for \"e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c\" returns successfully"
Apr 17 23:31:23.970012 containerd[1454]: time="2026-04-17T23:31:23.969922126Z" level=info msg="shim disconnected" id=084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b namespace=k8s.io
Apr 17 23:31:23.970012 containerd[1454]: time="2026-04-17T23:31:23.970002242Z" level=warning msg="cleaning up after shim disconnected" id=084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b namespace=k8s.io
Apr 17 23:31:23.970012 containerd[1454]: time="2026-04-17T23:31:23.970012176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:31:24.001126 containerd[1454]: time="2026-04-17T23:31:24.000949912Z" level=info msg="TearDown network for sandbox \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" successfully"
Apr 17 23:31:24.001126 containerd[1454]: time="2026-04-17T23:31:24.001010645Z" level=info msg="StopPodSandbox for \"084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b\" returns successfully"
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056587 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-kernel\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056660 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c898n\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-kube-api-access-c898n\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056677 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-bpf-maps\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056696 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cni-path\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056717 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-etc-cni-netd\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.058675 kubelet[2502]: I0417 23:31:24.056733 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-hostproc\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.056757 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf10608a-9a4a-48ce-bda3-f59395fc07e4-cilium-config-path\") pod \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\" (UID: \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.056776 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-config-path\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.056793 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx76t\" (UniqueName: \"kubernetes.io/projected/bf10608a-9a4a-48ce-bda3-f59395fc07e4-kube-api-access-vx76t\") pod \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\" (UID: \"bf10608a-9a4a-48ce-bda3-f59395fc07e4\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.056811 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-lib-modules\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.057038 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-hubble-tls\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.068760 kubelet[2502]: I0417 23:31:24.057449 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-xtables-lock\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.077817 kubelet[2502]: I0417 23:31:24.057491 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a0c505f-4b9c-4262-acf5-27c692151472-clustermesh-secrets\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.077817 kubelet[2502]: I0417 23:31:24.057511 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-net\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.077817 kubelet[2502]: I0417 23:31:24.057533 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-cgroup\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.077817 kubelet[2502]: I0417 23:31:24.057555 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-run\") pod \"6a0c505f-4b9c-4262-acf5-27c692151472\" (UID: \"6a0c505f-4b9c-4262-acf5-27c692151472\") "
Apr 17 23:31:24.077817 kubelet[2502]: I0417 23:31:24.060809 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.080770 kubelet[2502]: I0417 23:31:24.066698 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.080770 kubelet[2502]: I0417 23:31:24.079365 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.080770 kubelet[2502]: I0417 23:31:24.079468 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082287 kubelet[2502]: I0417 23:31:24.058157 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082287 kubelet[2502]: I0417 23:31:24.082061 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082503 kubelet[2502]: I0417 23:31:24.082403 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082503 kubelet[2502]: I0417 23:31:24.082428 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082503 kubelet[2502]: I0417 23:31:24.082449 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.082503 kubelet[2502]: I0417 23:31:24.082465 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:31:24.108673 kubelet[2502]: I0417 23:31:24.105553 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf10608a-9a4a-48ce-bda3-f59395fc07e4-kube-api-access-vx76t" (OuterVolumeSpecName: "kube-api-access-vx76t") pod "bf10608a-9a4a-48ce-bda3-f59395fc07e4" (UID: "bf10608a-9a4a-48ce-bda3-f59395fc07e4"). InnerVolumeSpecName "kube-api-access-vx76t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:31:24.108673 kubelet[2502]: I0417 23:31:24.107573 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:31:24.110452 kubelet[2502]: I0417 23:31:24.109529 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0c505f-4b9c-4262-acf5-27c692151472-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:31:24.120756 kubelet[2502]: I0417 23:31:24.120614 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:31:24.125508 kubelet[2502]: I0417 23:31:24.125424 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf10608a-9a4a-48ce-bda3-f59395fc07e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf10608a-9a4a-48ce-bda3-f59395fc07e4" (UID: "bf10608a-9a4a-48ce-bda3-f59395fc07e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:31:24.126127 kubelet[2502]: I0417 23:31:24.125596 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-kube-api-access-c898n" (OuterVolumeSpecName: "kube-api-access-c898n") pod "6a0c505f-4b9c-4262-acf5-27c692151472" (UID: "6a0c505f-4b9c-4262-acf5-27c692151472"). InnerVolumeSpecName "kube-api-access-c898n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:31:24.157696 kubelet[2502]: I0417 23:31:24.157516 2502 scope.go:117] "RemoveContainer" containerID="122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f"
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157777 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157791 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157801 2502 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157811 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c898n\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-kube-api-access-c898n\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157820 2502 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.157829 2502 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.158654 kubelet[2502]: I0417 23:31:24.158054 2502 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.165922 systemd[1]: Removed slice kubepods-burstable-pod6a0c505f_4b9c_4262_acf5_27c692151472.slice - libcontainer container kubepods-burstable-pod6a0c505f_4b9c_4262_acf5_27c692151472.slice.
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166676 2502 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166720 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf10608a-9a4a-48ce-bda3-f59395fc07e4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166731 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0c505f-4b9c-4262-acf5-27c692151472-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166737 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vx76t\" (UniqueName: \"kubernetes.io/projected/bf10608a-9a4a-48ce-bda3-f59395fc07e4-kube-api-access-vx76t\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166745 2502 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166752 2502 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a0c505f-4b9c-4262-acf5-27c692151472-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166757 2502 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167372 kubelet[2502]: I0417 23:31:24.166763 2502 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a0c505f-4b9c-4262-acf5-27c692151472-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.167573 containerd[1454]: time="2026-04-17T23:31:24.166585858Z" level=info msg="RemoveContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\""
Apr 17 23:31:24.167801 kubelet[2502]: I0417 23:31:24.166769 2502 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a0c505f-4b9c-4262-acf5-27c692151472-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 17 23:31:24.168048 systemd[1]: kubepods-burstable-pod6a0c505f_4b9c_4262_acf5_27c692151472.slice: Consumed 7.497s CPU time.
Apr 17 23:31:24.181599 systemd[1]: Removed slice kubepods-besteffort-podbf10608a_9a4a_48ce_bda3_f59395fc07e4.slice - libcontainer container kubepods-besteffort-podbf10608a_9a4a_48ce_bda3_f59395fc07e4.slice.
Apr 17 23:31:24.187918 containerd[1454]: time="2026-04-17T23:31:24.187677642Z" level=info msg="RemoveContainer for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" returns successfully"
Apr 17 23:31:24.188742 kubelet[2502]: I0417 23:31:24.188600 2502 scope.go:117] "RemoveContainer" containerID="0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e"
Apr 17 23:31:24.202723 containerd[1454]: time="2026-04-17T23:31:24.202523756Z" level=info msg="RemoveContainer for \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\""
Apr 17 23:31:24.226423 containerd[1454]: time="2026-04-17T23:31:24.225820404Z" level=info msg="RemoveContainer for \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\" returns successfully"
Apr 17 23:31:24.227005 kubelet[2502]: I0417 23:31:24.226952 2502 scope.go:117] "RemoveContainer" containerID="512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e"
Apr 17 23:31:24.243586 containerd[1454]: time="2026-04-17T23:31:24.243364313Z" level=info msg="RemoveContainer for \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\""
Apr 17 23:31:24.267707 containerd[1454]: time="2026-04-17T23:31:24.266683592Z" level=info msg="RemoveContainer for \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\" returns successfully"
Apr 17 23:31:24.268665 kubelet[2502]: I0417 23:31:24.268421 2502 scope.go:117] "RemoveContainer" containerID="e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d"
Apr 17 23:31:24.345546 containerd[1454]: time="2026-04-17T23:31:24.279977589Z" level=info msg="RemoveContainer for \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\""
Apr 17 23:31:24.366645 containerd[1454]: time="2026-04-17T23:31:24.366443306Z" level=info msg="RemoveContainer for \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\" returns successfully"
Apr 17 23:31:24.368448 kubelet[2502]: I0417 23:31:24.367676 2502 scope.go:117] "RemoveContainer" containerID="c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f"
Apr 17 23:31:24.373418 containerd[1454]: time="2026-04-17T23:31:24.372715459Z" level=info msg="RemoveContainer for \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\""
Apr 17 23:31:24.390540 containerd[1454]: time="2026-04-17T23:31:24.390489979Z" level=info msg="RemoveContainer for \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\" returns successfully"
Apr 17 23:31:24.391473 kubelet[2502]: I0417 23:31:24.391446 2502 scope.go:117] "RemoveContainer" containerID="122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f"
Apr 17 23:31:24.403470 containerd[1454]: time="2026-04-17T23:31:24.403317864Z" level=error msg="ContainerStatus for \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\": not found"
Apr 17 23:31:24.429735 kubelet[2502]: E0417 23:31:24.426742 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\": not found" containerID="122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f"
Apr 17 23:31:24.432382 kubelet[2502]: I0417 23:31:24.431514 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f"} err="failed to get container status \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\": rpc error: code = NotFound desc = an error occurred when try to find container \"122416165fa8ad36882a6c6487ba9c3c7af1b57c5cda407c0a65fd7bdf05825f\": not found"
Apr 17 23:31:24.432760 kubelet[2502]: I0417 23:31:24.432382 2502 scope.go:117] "RemoveContainer" containerID="0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e"
Apr 17 23:31:24.434473 containerd[1454]: time="2026-04-17T23:31:24.433447657Z" level=error msg="ContainerStatus for \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\": not found"
Apr 17 23:31:24.434726 kubelet[2502]: E0417 23:31:24.434283 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\": not found" containerID="0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e"
Apr 17 23:31:24.434726 kubelet[2502]: I0417 23:31:24.434317 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e"} err="failed to get container status \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f9a1c663438c622677fe9654d40686c6ce3c6302796d37f856e3d945958202e\": not found"
Apr 17 23:31:24.434726 kubelet[2502]: I0417 23:31:24.434347 2502 scope.go:117] "RemoveContainer" containerID="512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e"
Apr 17 23:31:24.435343 containerd[1454]: time="2026-04-17T23:31:24.435310879Z" level=error msg="ContainerStatus for \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\": not found"
Apr 17 23:31:24.436608 kubelet[2502]: E0417 23:31:24.436321 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\": not found" containerID="512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e"
Apr 17 23:31:24.436608 kubelet[2502]: I0417 23:31:24.436549 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e"} err="failed to get container status \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\": rpc error: code = NotFound desc = an error occurred when try to find container \"512af0f855a04038bdb71348ae62f58d03f66eb95ab8b639aa20085fd6b7606e\": not found"
Apr 17 23:31:24.436608 kubelet[2502]: I0417 23:31:24.436582 2502 scope.go:117] "RemoveContainer" containerID="e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d"
Apr 17 23:31:24.437379 containerd[1454]: time="2026-04-17T23:31:24.436936177Z" level=error msg="ContainerStatus for \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\": not found"
Apr 17 23:31:24.437657 kubelet[2502]: E0417 23:31:24.437617 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\": not found" containerID="e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d"
Apr 17 23:31:24.437657 kubelet[2502]: I0417 23:31:24.437646 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d"} err="failed to get container status \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5f0bd3c01e3915d7cf7543770d99d441f5105670fe3e353ca81db603de57a3d\": not found"
Apr 17 23:31:24.437704 kubelet[2502]: I0417 23:31:24.437665 2502 scope.go:117] "RemoveContainer" containerID="c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f"
Apr 17 23:31:24.437909 containerd[1454]: time="2026-04-17T23:31:24.437887301Z" level=error msg="ContainerStatus for \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\": not found"
Apr 17 23:31:24.438273 kubelet[2502]: E0417 23:31:24.438222 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\": not found" containerID="c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f"
Apr 17 23:31:24.438349 kubelet[2502]: I0417 23:31:24.438277 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f"} err="failed to get container status \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9b9795214d5696153ca7fae221cb0a0cd30956ecba1bada0b07ef30fba18a4f\": not found"
Apr 17 23:31:24.438349 kubelet[2502]: I0417 23:31:24.438294 2502 scope.go:117] "RemoveContainer" containerID="10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e"
Apr 17 23:31:24.439998 containerd[1454]: time="2026-04-17T23:31:24.439879874Z" level=info msg="RemoveContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\""
Apr 17 23:31:24.448206 containerd[1454]: time="2026-04-17T23:31:24.447708826Z" level=info msg="RemoveContainer for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" returns successfully"
Apr 17 23:31:24.449512 kubelet[2502]: I0417 23:31:24.449386 2502 scope.go:117] "RemoveContainer" containerID="10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e"
Apr 17 23:31:24.451174 containerd[1454]: time="2026-04-17T23:31:24.450930322Z" level=error msg="ContainerStatus for \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\": not found"
Apr 17 23:31:24.452832 kubelet[2502]: E0417 23:31:24.452526 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\": not found" containerID="10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e"
Apr 17 23:31:24.453404 kubelet[2502]: I0417 23:31:24.453026 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e"} err="failed to get container status \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\": rpc error: code = NotFound desc = an error occurred when try to find container \"10669be45f036c28442ea3a6dacffda41efc88de074c6880504db341cb9e879e\": not found"
Apr 17 23:31:24.577038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e348b2a4c491af8fc50cfed016cafe1d3a96d6a44e812aa477e9997ce103c90c-rootfs.mount: Deactivated successfully.
Apr 17 23:31:24.577457 systemd[1]: var-lib-kubelet-pods-bf10608a\x2d9a4a\x2d48ce\x2dbda3\x2df59395fc07e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvx76t.mount: Deactivated successfully.
Apr 17 23:31:24.577526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b-rootfs.mount: Deactivated successfully. Apr 17 23:31:24.577583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-084f46548de4cc1b6c1b84681121d2bc6d53659cad9670be3d5e29aef9ae2b0b-shm.mount: Deactivated successfully. Apr 17 23:31:24.577648 systemd[1]: var-lib-kubelet-pods-6a0c505f\x2d4b9c\x2d4262\x2dacf5\x2d27c692151472-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc898n.mount: Deactivated successfully. Apr 17 23:31:24.577694 systemd[1]: var-lib-kubelet-pods-6a0c505f\x2d4b9c\x2d4262\x2dacf5\x2d27c692151472-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 17 23:31:24.577764 systemd[1]: var-lib-kubelet-pods-6a0c505f\x2d4b9c\x2d4262\x2dacf5\x2d27c692151472-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 17 23:31:25.464918 sshd[4143]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:25.473694 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:37242.service: Deactivated successfully. Apr 17 23:31:25.477742 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:31:25.480738 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:31:25.490727 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:37258.service - OpenSSH per-connection server daemon (10.0.0.1:37258). Apr 17 23:31:25.492835 systemd-logind[1434]: Removed session 23. Apr 17 23:31:25.549526 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 37258 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:25.552737 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:25.560583 systemd-logind[1434]: New session 24 of user core. Apr 17 23:31:25.576497 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 17 23:31:25.740343 kubelet[2502]: I0417 23:31:25.739743 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0c505f-4b9c-4262-acf5-27c692151472" path="/var/lib/kubelet/pods/6a0c505f-4b9c-4262-acf5-27c692151472/volumes" Apr 17 23:31:25.743601 kubelet[2502]: I0417 23:31:25.743491 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf10608a-9a4a-48ce-bda3-f59395fc07e4" path="/var/lib/kubelet/pods/bf10608a-9a4a-48ce-bda3-f59395fc07e4/volumes" Apr 17 23:31:26.734651 kubelet[2502]: E0417 23:31:26.734512 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:26.784943 kubelet[2502]: E0417 23:31:26.784657 2502 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 23:31:27.007386 sshd[4304]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:27.017828 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:37258.service: Deactivated successfully. Apr 17 23:31:27.021208 systemd[1]: session-24.scope: Deactivated successfully. Apr 17 23:31:27.021562 systemd[1]: session-24.scope: Consumed 1.214s CPU time. Apr 17 23:31:27.028025 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit. Apr 17 23:31:27.042016 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:37274.service - OpenSSH per-connection server daemon (10.0.0.1:37274). Apr 17 23:31:27.047844 systemd-logind[1434]: Removed session 24. Apr 17 23:31:27.102409 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 37274 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:27.105323 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:27.128944 systemd-logind[1434]: New session 25 of user core. 
Apr 17 23:31:27.143672 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 17 23:31:27.155796 systemd[1]: Created slice kubepods-burstable-pod7f943a6c_3dd1_4705_be70_5c43c9504ee8.slice - libcontainer container kubepods-burstable-pod7f943a6c_3dd1_4705_be70_5c43c9504ee8.slice. Apr 17 23:31:27.223156 sshd[4317]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:27.237605 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:37274.service: Deactivated successfully. Apr 17 23:31:27.239947 systemd[1]: session-25.scope: Deactivated successfully. Apr 17 23:31:27.240653 kubelet[2502]: I0417 23:31:27.240528 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-cilium-cgroup\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240724 kubelet[2502]: I0417 23:31:27.240675 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f943a6c-3dd1-4705-be70-5c43c9504ee8-cilium-config-path\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240724 kubelet[2502]: I0417 23:31:27.240698 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-host-proc-sys-kernel\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240793 kubelet[2502]: I0417 23:31:27.240722 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-cilium-run\") pod \"cilium-x2nc5\" (UID: 
\"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240793 kubelet[2502]: I0417 23:31:27.240746 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f943a6c-3dd1-4705-be70-5c43c9504ee8-clustermesh-secrets\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240793 kubelet[2502]: I0417 23:31:27.240769 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-bpf-maps\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.240793 kubelet[2502]: I0417 23:31:27.240787 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-etc-cni-netd\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 23:31:27.240804 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzs74\" (UniqueName: \"kubernetes.io/projected/7f943a6c-3dd1-4705-be70-5c43c9504ee8-kube-api-access-tzs74\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 23:31:27.240825 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-lib-modules\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 
23:31:27.240842 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-xtables-lock\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 23:31:27.241241 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-hostproc\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 23:31:27.241271 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-cni-path\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242584 kubelet[2502]: I0417 23:31:27.241309 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7f943a6c-3dd1-4705-be70-5c43c9504ee8-cilium-ipsec-secrets\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242759 kubelet[2502]: I0417 23:31:27.241352 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f943a6c-3dd1-4705-be70-5c43c9504ee8-host-proc-sys-net\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242759 kubelet[2502]: I0417 23:31:27.241392 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/7f943a6c-3dd1-4705-be70-5c43c9504ee8-hubble-tls\") pod \"cilium-x2nc5\" (UID: \"7f943a6c-3dd1-4705-be70-5c43c9504ee8\") " pod="kube-system/cilium-x2nc5" Apr 17 23:31:27.242780 systemd-logind[1434]: Session 25 logged out. Waiting for processes to exit. Apr 17 23:31:27.256487 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Apr 17 23:31:27.257498 systemd-logind[1434]: Removed session 25. Apr 17 23:31:27.297843 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:27.299245 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:27.319306 systemd-logind[1434]: New session 26 of user core. Apr 17 23:31:27.328720 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 17 23:31:27.469625 kubelet[2502]: E0417 23:31:27.468328 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:27.491263 containerd[1454]: time="2026-04-17T23:31:27.491218596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2nc5,Uid:7f943a6c-3dd1-4705-be70-5c43c9504ee8,Namespace:kube-system,Attempt:0,}" Apr 17 23:31:27.535672 containerd[1454]: time="2026-04-17T23:31:27.535026071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:27.535672 containerd[1454]: time="2026-04-17T23:31:27.535394266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:27.535672 containerd[1454]: time="2026-04-17T23:31:27.535447293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:27.535672 containerd[1454]: time="2026-04-17T23:31:27.535537681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:27.572646 systemd[1]: Started cri-containerd-73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250.scope - libcontainer container 73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250. Apr 17 23:31:27.613160 containerd[1454]: time="2026-04-17T23:31:27.613110333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2nc5,Uid:7f943a6c-3dd1-4705-be70-5c43c9504ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\"" Apr 17 23:31:27.614174 kubelet[2502]: E0417 23:31:27.614026 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:27.620157 containerd[1454]: time="2026-04-17T23:31:27.619918729Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:31:27.641656 containerd[1454]: time="2026-04-17T23:31:27.641550819Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4\"" Apr 17 23:31:27.642688 containerd[1454]: time="2026-04-17T23:31:27.642611415Z" level=info msg="StartContainer for \"c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4\"" Apr 17 23:31:27.693629 systemd[1]: Started cri-containerd-c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4.scope - libcontainer container 
c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4. Apr 17 23:31:27.726390 containerd[1454]: time="2026-04-17T23:31:27.726344357Z" level=info msg="StartContainer for \"c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4\" returns successfully" Apr 17 23:31:27.736469 systemd[1]: cri-containerd-c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4.scope: Deactivated successfully. Apr 17 23:31:27.768929 containerd[1454]: time="2026-04-17T23:31:27.768704438Z" level=info msg="shim disconnected" id=c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4 namespace=k8s.io Apr 17 23:31:27.768929 containerd[1454]: time="2026-04-17T23:31:27.768790694Z" level=warning msg="cleaning up after shim disconnected" id=c838d9bc712f07f258d7e10bb26277578e1d8668cfbdb6b3ec84664a1de94ce4 namespace=k8s.io Apr 17 23:31:27.768929 containerd[1454]: time="2026-04-17T23:31:27.768799838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:28.182266 kubelet[2502]: E0417 23:31:28.182148 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:28.191922 containerd[1454]: time="2026-04-17T23:31:28.191769916Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:31:28.222026 containerd[1454]: time="2026-04-17T23:31:28.221905559Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a\"" Apr 17 23:31:28.222801 containerd[1454]: time="2026-04-17T23:31:28.222632870Z" level=info msg="StartContainer for 
\"f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a\"" Apr 17 23:31:28.258473 systemd[1]: Started cri-containerd-f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a.scope - libcontainer container f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a. Apr 17 23:31:28.293811 containerd[1454]: time="2026-04-17T23:31:28.293668772Z" level=info msg="StartContainer for \"f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a\" returns successfully" Apr 17 23:31:28.305275 systemd[1]: cri-containerd-f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a.scope: Deactivated successfully. Apr 17 23:31:28.335041 containerd[1454]: time="2026-04-17T23:31:28.334956835Z" level=info msg="shim disconnected" id=f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a namespace=k8s.io Apr 17 23:31:28.335041 containerd[1454]: time="2026-04-17T23:31:28.335029804Z" level=warning msg="cleaning up after shim disconnected" id=f4ab19064c3c3e4dc808fd121a01e5c6e3ee5e788a0503bbf47dd281b1031a9a namespace=k8s.io Apr 17 23:31:28.335041 containerd[1454]: time="2026-04-17T23:31:28.335037259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:29.191014 kubelet[2502]: E0417 23:31:29.190910 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:29.199168 containerd[1454]: time="2026-04-17T23:31:29.199056040Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:31:29.224488 containerd[1454]: time="2026-04-17T23:31:29.224283881Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df\"" Apr 17 23:31:29.227377 containerd[1454]: time="2026-04-17T23:31:29.225351385Z" level=info msg="StartContainer for \"2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df\"" Apr 17 23:31:29.277357 systemd[1]: Started cri-containerd-2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df.scope - libcontainer container 2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df. Apr 17 23:31:29.308466 systemd[1]: cri-containerd-2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df.scope: Deactivated successfully. Apr 17 23:31:29.308907 containerd[1454]: time="2026-04-17T23:31:29.308754639Z" level=info msg="StartContainer for \"2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df\" returns successfully" Apr 17 23:31:29.339820 containerd[1454]: time="2026-04-17T23:31:29.339731819Z" level=info msg="shim disconnected" id=2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df namespace=k8s.io Apr 17 23:31:29.339820 containerd[1454]: time="2026-04-17T23:31:29.339802188Z" level=warning msg="cleaning up after shim disconnected" id=2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df namespace=k8s.io Apr 17 23:31:29.339820 containerd[1454]: time="2026-04-17T23:31:29.339810032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:29.350685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c4f40249cf75a92918e639629cfdc4d05757dd7a78a376a6bc8ce5d37e2c0df-rootfs.mount: Deactivated successfully. 
Apr 17 23:31:30.197984 kubelet[2502]: E0417 23:31:30.197145 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:30.204444 containerd[1454]: time="2026-04-17T23:31:30.204307040Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:31:30.235192 containerd[1454]: time="2026-04-17T23:31:30.234645170Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0\"" Apr 17 23:31:30.237196 containerd[1454]: time="2026-04-17T23:31:30.237111394Z" level=info msg="StartContainer for \"a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0\"" Apr 17 23:31:30.309629 systemd[1]: Started cri-containerd-a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0.scope - libcontainer container a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0. Apr 17 23:31:30.350451 systemd[1]: cri-containerd-a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0.scope: Deactivated successfully. Apr 17 23:31:30.353690 containerd[1454]: time="2026-04-17T23:31:30.353573350Z" level=info msg="StartContainer for \"a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0\" returns successfully" Apr 17 23:31:30.380739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0-rootfs.mount: Deactivated successfully. 
Apr 17 23:31:30.390802 containerd[1454]: time="2026-04-17T23:31:30.389973288Z" level=info msg="shim disconnected" id=a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0 namespace=k8s.io Apr 17 23:31:30.390802 containerd[1454]: time="2026-04-17T23:31:30.390762803Z" level=warning msg="cleaning up after shim disconnected" id=a4c75283c662f3944d7bf626051448037114e57867a25b4c2ec7a0daaec3e9d0 namespace=k8s.io Apr 17 23:31:30.390802 containerd[1454]: time="2026-04-17T23:31:30.390795181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:31.205365 kubelet[2502]: E0417 23:31:31.205290 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:31.211778 containerd[1454]: time="2026-04-17T23:31:31.211685343Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:31:31.235398 containerd[1454]: time="2026-04-17T23:31:31.235324771Z" level=info msg="CreateContainer within sandbox \"73bd354f6ff1fb34fdf2536cfd48f7ae11cc9787f01146724b6ee5b5821a9250\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996\"" Apr 17 23:31:31.236443 containerd[1454]: time="2026-04-17T23:31:31.236394518Z" level=info msg="StartContainer for \"d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996\"" Apr 17 23:31:31.282384 systemd[1]: Started cri-containerd-d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996.scope - libcontainer container d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996. 
Apr 17 23:31:31.321336 containerd[1454]: time="2026-04-17T23:31:31.321277240Z" level=info msg="StartContainer for \"d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996\" returns successfully" Apr 17 23:31:31.630138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 17 23:31:32.212343 kubelet[2502]: E0417 23:31:32.212242 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:32.229190 kubelet[2502]: I0417 23:31:32.229127 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x2nc5" podStartSLOduration=5.229043145 podStartE2EDuration="5.229043145s" podCreationTimestamp="2026-04-17 23:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:31:32.22890446 +0000 UTC m=+80.597288069" watchObservedRunningTime="2026-04-17 23:31:32.229043145 +0000 UTC m=+80.597426757" Apr 17 23:31:33.462311 kubelet[2502]: E0417 23:31:33.462235 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:33.789259 systemd[1]: run-containerd-runc-k8s.io-d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996-runc.2DB6II.mount: Deactivated successfully. 
Apr 17 23:31:35.287495 systemd-networkd[1378]: lxc_health: Link UP Apr 17 23:31:35.294300 systemd-networkd[1378]: lxc_health: Gained carrier Apr 17 23:31:35.466400 kubelet[2502]: E0417 23:31:35.466338 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:35.946213 systemd[1]: run-containerd-runc-k8s.io-d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996-runc.gJYvkq.mount: Deactivated successfully. Apr 17 23:31:36.227946 kubelet[2502]: E0417 23:31:36.226391 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:36.695429 systemd-networkd[1378]: lxc_health: Gained IPv6LL Apr 17 23:31:37.233589 kubelet[2502]: E0417 23:31:37.233509 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:37.739561 kubelet[2502]: E0417 23:31:37.739497 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:38.733860 kubelet[2502]: E0417 23:31:38.733746 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:40.733707 kubelet[2502]: E0417 23:31:40.733584 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:42.426430 systemd[1]: run-containerd-runc-k8s.io-d342d87a0da4908f0fdd640079e8c81338baabe684adb3a989de5ad3c311b996-runc.r6m2XR.mount: Deactivated successfully. 
Apr 17 23:31:44.595596 sshd[4326]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:44.599409 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:37280.service: Deactivated successfully. Apr 17 23:31:44.601165 systemd[1]: session-26.scope: Deactivated successfully. Apr 17 23:31:44.601916 systemd-logind[1434]: Session 26 logged out. Waiting for processes to exit. Apr 17 23:31:44.603895 systemd-logind[1434]: Removed session 26.