Apr 21 10:12:36.848493 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:12:36.848512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:12:36.848522 kernel: BIOS-provided physical RAM map:
Apr 21 10:12:36.848527 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 21 10:12:36.848532 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 21 10:12:36.848538 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:12:36.848544 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 21 10:12:36.848549 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 21 10:12:36.848554 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:12:36.848561 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:12:36.848566 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:12:36.848571 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:12:36.848576 kernel: NX (Execute Disable) protection: active
Apr 21 10:12:36.848582 kernel: APIC: Static calls initialized
Apr 21 10:12:36.848588 kernel: SMBIOS 2.8 present.
Apr 21 10:12:36.848595 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 21 10:12:36.848601 kernel: Hypervisor detected: KVM
Apr 21 10:12:36.848607 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:12:36.848612 kernel: kvm-clock: using sched offset of 3431741552 cycles
Apr 21 10:12:36.848619 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:12:36.848628 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:12:36.848636 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:12:36.848646 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:12:36.848654 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 10:12:36.848665 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:12:36.848671 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:12:36.848676 kernel: Using GB pages for direct mapping
Apr 21 10:12:36.848681 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:12:36.848686 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 21 10:12:36.848690 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848695 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848700 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848705 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 21 10:12:36.848710 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848715 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848720 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848724 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:12:36.848729 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 21 10:12:36.848734 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 21 10:12:36.848739 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 21 10:12:36.848746 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 21 10:12:36.848752 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 21 10:12:36.848757 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 21 10:12:36.848762 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 21 10:12:36.848767 kernel: No NUMA configuration found
Apr 21 10:12:36.848772 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 21 10:12:36.848777 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 21 10:12:36.848783 kernel: Zone ranges:
Apr 21 10:12:36.848788 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:12:36.848793 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 21 10:12:36.848798 kernel: Normal empty
Apr 21 10:12:36.848803 kernel: Movable zone start for each node
Apr 21 10:12:36.848835 kernel: Early memory node ranges
Apr 21 10:12:36.848840 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:12:36.848845 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 21 10:12:36.848850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 21 10:12:36.848855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:12:36.848862 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:12:36.848867 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 21 10:12:36.848872 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:12:36.848877 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:12:36.848882 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:12:36.848887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:12:36.848892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:12:36.848897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:12:36.848901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:12:36.848908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:12:36.848913 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:12:36.848918 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:12:36.848923 kernel: TSC deadline timer available
Apr 21 10:12:36.848928 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:12:36.848932 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:12:36.848937 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:12:36.848942 kernel: kvm-guest: setup PV sched yield
Apr 21 10:12:36.848947 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:12:36.848954 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:12:36.848959 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:12:36.848964 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:12:36.848969 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:12:36.848974 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:12:36.848979 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:12:36.848984 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:12:36.848989 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:12:36.848994 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:12:36.849001 kernel: random: crng init done
Apr 21 10:12:36.849006 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:12:36.849011 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:12:36.849017 kernel: Fallback order for Node 0: 0
Apr 21 10:12:36.849022 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 21 10:12:36.849026 kernel: Policy zone: DMA32
Apr 21 10:12:36.849031 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:12:36.849037 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 21 10:12:36.849043 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:12:36.849048 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:12:36.849053 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:12:36.849058 kernel: Dynamic Preempt: voluntary
Apr 21 10:12:36.849063 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:12:36.849069 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:12:36.849074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:12:36.849079 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:12:36.849084 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:12:36.849089 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:12:36.849095 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:12:36.849100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:12:36.849105 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:12:36.849110 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:12:36.849115 kernel: Console: colour VGA+ 80x25
Apr 21 10:12:36.849120 kernel: printk: console [ttyS0] enabled
Apr 21 10:12:36.849125 kernel: ACPI: Core revision 20230628
Apr 21 10:12:36.849130 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:12:36.849135 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:12:36.849142 kernel: x2apic enabled
Apr 21 10:12:36.849147 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:12:36.849152 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:12:36.849157 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:12:36.849162 kernel: kvm-guest: setup PV IPIs
Apr 21 10:12:36.849167 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:12:36.849173 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:12:36.849184 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:12:36.849190 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:12:36.849195 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:12:36.849201 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:12:36.849206 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:12:36.849213 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:12:36.849218 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:12:36.849224 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:12:36.849242 kernel: RETBleed: Vulnerable
Apr 21 10:12:36.849250 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:12:36.849255 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:12:36.849261 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:12:36.849266 kernel: active return thunk: its_return_thunk
Apr 21 10:12:36.849272 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:12:36.849286 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:12:36.849292 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:12:36.849298 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:12:36.849303 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:12:36.849310 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:12:36.849316 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:12:36.849322 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:12:36.849327 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:12:36.849333 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:12:36.849338 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:12:36.849344 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:12:36.849349 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:12:36.849410 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:12:36.849418 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:12:36.849423 kernel: landlock: Up and running.
Apr 21 10:12:36.849429 kernel: SELinux: Initializing.
Apr 21 10:12:36.849435 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:12:36.849440 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:12:36.849446 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:12:36.849451 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:12:36.849457 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:12:36.849463 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:12:36.849470 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:12:36.849476 kernel: signal: max sigframe size: 3632
Apr 21 10:12:36.849481 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:12:36.849487 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:12:36.849492 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:12:36.849498 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:12:36.849503 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:12:36.849509 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:12:36.849514 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:12:36.849521 kernel: smpboot: Max logical packages: 1
Apr 21 10:12:36.849527 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:12:36.849532 kernel: devtmpfs: initialized
Apr 21 10:12:36.849538 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:12:36.849544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:12:36.849549 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:12:36.849555 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:12:36.849561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:12:36.849566 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:12:36.849573 kernel: audit: type=2000 audit(1776766355.938:1): state=initialized audit_enabled=0 res=1
Apr 21 10:12:36.849578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:12:36.849584 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:12:36.849589 kernel: cpuidle: using governor menu
Apr 21 10:12:36.849595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:12:36.849600 kernel: dca service started, version 1.12.1
Apr 21 10:12:36.849606 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:12:36.849611 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:12:36.849617 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:12:36.849624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:12:36.849629 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:12:36.849635 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:12:36.849640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:12:36.849646 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:12:36.849651 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:12:36.849657 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:12:36.849662 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:12:36.849668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:12:36.849675 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:12:36.849680 kernel: ACPI: Interpreter enabled
Apr 21 10:12:36.849686 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:12:36.849691 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:12:36.849697 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:12:36.849702 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:12:36.849708 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:12:36.849713 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:12:36.849840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:12:36.849907 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:12:36.849963 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:12:36.849970 kernel: PCI host bridge to bus 0000:00
Apr 21 10:12:36.850027 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:12:36.850145 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:12:36.850209 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:12:36.850260 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:12:36.850308 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:12:36.850414 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 21 10:12:36.850467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:12:36.850531 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:12:36.850593 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:12:36.850652 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:12:36.850706 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:12:36.850761 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:12:36.850840 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:12:36.850903 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:12:36.850959 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 21 10:12:36.851015 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:12:36.851072 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:12:36.851131 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:12:36.851185 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 21 10:12:36.851239 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:12:36.851293 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:12:36.851436 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:12:36.851498 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 21 10:12:36.851556 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:12:36.851611 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 21 10:12:36.851667 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:12:36.851726 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:12:36.851782 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:12:36.851878 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:12:36.851936 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 21 10:12:36.851992 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 21 10:12:36.852051 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:12:36.852105 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:12:36.852112 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:12:36.852118 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:12:36.852124 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:12:36.852129 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:12:36.852136 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:12:36.852141 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:12:36.852147 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:12:36.852152 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:12:36.852158 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:12:36.852163 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:12:36.852168 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:12:36.852174 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:12:36.852179 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:12:36.852186 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:12:36.852192 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:12:36.852197 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:12:36.852203 kernel: iommu: Default domain type: Translated
Apr 21 10:12:36.852208 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:12:36.852214 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:12:36.852219 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:12:36.852224 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 21 10:12:36.852230 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 21 10:12:36.852297 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:12:36.852380 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:12:36.852438 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:12:36.852445 kernel: vgaarb: loaded
Apr 21 10:12:36.852451 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:12:36.852467 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:12:36.852473 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:12:36.852486 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:12:36.852492 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:12:36.852515 kernel: pnp: PnP ACPI init
Apr 21 10:12:36.852670 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:12:36.852694 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:12:36.852700 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:12:36.852714 kernel: NET: Registered PF_INET protocol family
Apr 21 10:12:36.852734 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:12:36.852748 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:12:36.852761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:12:36.852777 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:12:36.852791 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:12:36.852796 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:12:36.852822 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:12:36.852828 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:12:36.852833 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:12:36.852839 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:12:36.852893 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:12:36.853024 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:12:36.853104 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:12:36.853159 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:12:36.853211 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:12:36.853263 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 21 10:12:36.853270 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:12:36.853277 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:12:36.853283 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:12:36.853288 kernel: Initialise system trusted keyrings
Apr 21 10:12:36.853296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:12:36.853301 kernel: Key type asymmetric registered
Apr 21 10:12:36.853307 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:12:36.853312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:12:36.853318 kernel: io scheduler mq-deadline registered
Apr 21 10:12:36.853324 kernel: io scheduler kyber registered
Apr 21 10:12:36.853329 kernel: io scheduler bfq registered
Apr 21 10:12:36.853335 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:12:36.853341 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:12:36.853348 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:12:36.853386 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:12:36.853392 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:12:36.853398 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:12:36.853403 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:12:36.853409 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:12:36.853415 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:12:36.853484 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:12:36.853492 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:12:36.853549 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:12:36.853602 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:12:36 UTC (1776766356)
Apr 21 10:12:36.853655 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:12:36.853663 kernel: intel_pstate: CPU model not supported
Apr 21 10:12:36.853668 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:12:36.853674 kernel: Segment Routing with IPv6
Apr 21 10:12:36.853679 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:12:36.853685 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:12:36.853692 kernel: Key type dns_resolver registered
Apr 21 10:12:36.853698 kernel: IPI shorthand broadcast: enabled
Apr 21 10:12:36.853703 kernel: sched_clock: Marking stable (682009516, 223322293)->(1035464631, -130132822)
Apr 21 10:12:36.853709 kernel: registered taskstats version 1
Apr 21 10:12:36.853714 kernel: Loading compiled-in X.509 certificates
Apr 21 10:12:36.853720 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:12:36.853726 kernel: Key type .fscrypt registered
Apr 21 10:12:36.853731 kernel: Key type fscrypt-provisioning registered
Apr 21 10:12:36.853737 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:12:36.853744 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:12:36.853749 kernel: ima: No architecture policies found
Apr 21 10:12:36.853755 kernel: clk: Disabling unused clocks
Apr 21 10:12:36.853760 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:12:36.853766 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:12:36.853771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:12:36.853777 kernel: Run /init as init process
Apr 21 10:12:36.853783 kernel: with arguments:
Apr 21 10:12:36.853788 kernel: /init
Apr 21 10:12:36.853795 kernel: with environment:
Apr 21 10:12:36.853800 kernel: HOME=/
Apr 21 10:12:36.853806 kernel: TERM=linux
Apr 21 10:12:36.853840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:12:36.853850 systemd[1]: Detected virtualization kvm.
Apr 21 10:12:36.853856 systemd[1]: Detected architecture x86-64.
Apr 21 10:12:36.853862 systemd[1]: Running in initrd.
Apr 21 10:12:36.853868 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:12:36.853875 systemd[1]: Hostname set to .
Apr 21 10:12:36.853881 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:12:36.853887 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:12:36.853892 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:12:36.853898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:12:36.853905 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:12:36.853911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:12:36.853916 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:12:36.853924 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:12:36.853940 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:12:36.853947 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:12:36.853953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:12:36.853959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:12:36.853966 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:12:36.853972 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:12:36.853978 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:12:36.853984 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:12:36.853990 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:12:36.853996 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:12:36.854002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:12:36.854007 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:12:36.854015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:12:36.854021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:12:36.854027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:12:36.854033 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:12:36.854039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:12:36.854045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:12:36.854051 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:12:36.854057 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:12:36.854062 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:12:36.854070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:12:36.854076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:12:36.854082 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:12:36.854088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:12:36.854107 systemd-journald[193]: Collecting audit messages is disabled.
Apr 21 10:12:36.854123 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:12:36.854133 systemd-journald[193]: Journal started
Apr 21 10:12:36.854148 systemd-journald[193]: Runtime Journal (/run/log/journal/a49a1b22a195436cb8258098562622ca) is 6.0M, max 48.4M, 42.3M free.
Apr 21 10:12:36.860427 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:12:36.854270 systemd-modules-load[194]: Inserted module 'overlay'
Apr 21 10:12:36.993731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:12:36.993764 kernel: Bridge firewalling registered
Apr 21 10:12:36.878506 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 21 10:12:36.999102 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:12:37.000836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:12:37.002590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:12:37.005661 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:12:37.018525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:12:37.022420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:12:37.022968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:12:37.023687 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:12:37.035381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:12:37.038543 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:12:37.043590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:12:37.043780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:12:37.050439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:12:37.055993 dracut-cmdline[226]: dracut-dracut-053
Apr 21 10:12:37.065593 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:12:37.062523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:12:37.091473 systemd-resolved[239]: Positive Trust Anchors:
Apr 21 10:12:37.091494 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:12:37.091518 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:12:37.093418 systemd-resolved[239]: Defaulting to hostname 'linux'.
Apr 21 10:12:37.094180 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:12:37.107897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:12:37.131576 kernel: SCSI subsystem initialized
Apr 21 10:12:37.140402 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:12:37.151409 kernel: iscsi: registered transport (tcp)
Apr 21 10:12:37.169562 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:12:37.169623 kernel: QLogic iSCSI HBA Driver
Apr 21 10:12:37.201468 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:12:37.210565 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:12:37.236962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:12:37.237011 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:12:37.239116 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:12:37.276601 kernel: raid6: avx512x4 gen() 43432 MB/s
Apr 21 10:12:37.293570 kernel: raid6: avx512x2 gen() 42736 MB/s
Apr 21 10:12:37.310564 kernel: raid6: avx512x1 gen() 42575 MB/s
Apr 21 10:12:37.327593 kernel: raid6: avx2x4 gen() 36232 MB/s
Apr 21 10:12:37.344406 kernel: raid6: avx2x2 gen() 36108 MB/s
Apr 21 10:12:37.362103 kernel: raid6: avx2x1 gen() 28125 MB/s
Apr 21 10:12:37.362140 kernel: raid6: using algorithm avx512x4 gen() 43432 MB/s
Apr 21 10:12:37.380087 kernel: raid6: .... xor() 9777 MB/s, rmw enabled
Apr 21 10:12:37.380132 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:12:37.398411 kernel: xor: automatically using best checksumming function avx
Apr 21 10:12:37.523578 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:12:37.532246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:12:37.541604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:12:37.553327 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Apr 21 10:12:37.556978 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:12:37.565458 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:12:37.574903 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Apr 21 10:12:37.598223 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:12:37.607549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:12:37.637007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:12:37.644553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:12:37.654903 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:12:37.655516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:12:37.661341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:12:37.664493 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:12:37.671442 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 21 10:12:37.672374 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:12:37.673498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:12:37.676604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:12:37.685084 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 21 10:12:37.676675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:12:37.681576 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:12:37.683371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:12:37.685132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:12:37.695029 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:12:37.703483 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:12:37.703517 kernel: GPT:9289727 != 19775487
Apr 21 10:12:37.703525 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:12:37.703533 kernel: GPT:9289727 != 19775487
Apr 21 10:12:37.704391 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:12:37.704403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:12:37.706385 kernel: libata version 3.00 loaded.
Apr 21 10:12:37.706410 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:12:37.708039 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:12:37.710887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:12:37.714606 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 10:12:37.714766 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 10:12:37.716544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:12:37.720426 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 21 10:12:37.720554 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 10:12:37.735400 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Apr 21 10:12:37.736407 kernel: scsi host0: ahci
Apr 21 10:12:37.736739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 21 10:12:37.826112 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (471)
Apr 21 10:12:37.826141 kernel: scsi host1: ahci
Apr 21 10:12:37.826278 kernel: scsi host2: ahci
Apr 21 10:12:37.826351 kernel: scsi host3: ahci
Apr 21 10:12:37.826466 kernel: scsi host4: ahci
Apr 21 10:12:37.826536 kernel: scsi host5: ahci
Apr 21 10:12:37.826608 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 21 10:12:37.826619 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 21 10:12:37.826627 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 21 10:12:37.826634 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 21 10:12:37.826641 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 21 10:12:37.826648 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 21 10:12:37.823549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:12:37.835205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 21 10:12:37.840114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:12:37.844088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 21 10:12:37.844164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 21 10:12:37.864558 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:12:37.868284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:12:37.874324 disk-uuid[556]: Primary Header is updated.
Apr 21 10:12:37.874324 disk-uuid[556]: Secondary Entries is updated.
Apr 21 10:12:37.874324 disk-uuid[556]: Secondary Header is updated.
Apr 21 10:12:37.877649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:12:37.887292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:12:38.051518 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 10:12:38.051647 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 10:12:38.054387 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 10:12:38.054429 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 21 10:12:38.055391 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 10:12:38.056387 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 10:12:38.057397 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 21 10:12:38.059007 kernel: ata3.00: applying bridge limits
Apr 21 10:12:38.059958 kernel: ata3.00: configured for UDMA/100
Apr 21 10:12:38.062408 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 21 10:12:38.110218 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 21 10:12:38.110476 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 10:12:38.128407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 21 10:12:38.887225 disk-uuid[557]: The operation has completed successfully.
Apr 21 10:12:38.889523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:12:38.908644 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:12:38.908747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:12:38.931643 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:12:38.936574 sh[595]: Success
Apr 21 10:12:38.948396 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 21 10:12:38.974750 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:12:38.993129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:12:38.996118 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:12:39.005961 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:12:39.005986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:12:39.005995 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:12:39.007448 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:12:39.008541 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:12:39.014670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:12:39.017245 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:12:39.033538 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:12:39.037882 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:12:39.046133 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:12:39.046152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:12:39.046160 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:12:39.050408 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:12:39.057119 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:12:39.060317 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:12:39.065960 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:12:39.073529 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:12:39.119388 ignition[681]: Ignition 2.19.0
Apr 21 10:12:39.119649 ignition[681]: Stage: fetch-offline
Apr 21 10:12:39.119675 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:12:39.119681 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:12:39.119746 ignition[681]: parsed url from cmdline: ""
Apr 21 10:12:39.119748 ignition[681]: no config URL provided
Apr 21 10:12:39.119752 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:12:39.119756 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:12:39.119777 ignition[681]: op(1): [started] loading QEMU firmware config module
Apr 21 10:12:39.119781 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 21 10:12:39.127785 ignition[681]: op(1): [finished] loading QEMU firmware config module
Apr 21 10:12:39.150416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:12:39.167513 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:12:39.240188 ignition[681]: parsing config with SHA512: e1de8dccdad0cbd00364da87f0722c78c11bb0e6a86f3c5eea384f7a82f330fefc5802e235d426284eff67acdac74833d21dddda33110a1e43bb278e2e9bb7db
Apr 21 10:12:39.243486 unknown[681]: fetched base config from "system"
Apr 21 10:12:39.243770 ignition[681]: fetch-offline: fetch-offline passed
Apr 21 10:12:39.243499 unknown[681]: fetched user config from "qemu"
Apr 21 10:12:39.243814 ignition[681]: Ignition finished successfully
Apr 21 10:12:39.249431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:12:39.254589 systemd-networkd[784]: lo: Link UP
Apr 21 10:12:39.254609 systemd-networkd[784]: lo: Gained carrier
Apr 21 10:12:39.255453 systemd-networkd[784]: Enumeration completed
Apr 21 10:12:39.255516 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:12:39.255920 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:12:39.255922 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:12:39.257380 systemd-networkd[784]: eth0: Link UP
Apr 21 10:12:39.257383 systemd-networkd[784]: eth0: Gained carrier
Apr 21 10:12:39.257391 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:12:39.257572 systemd[1]: Reached target network.target - Network.
Apr 21 10:12:39.259986 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 21 10:12:39.268494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:12:39.275398 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:12:39.278970 ignition[787]: Ignition 2.19.0
Apr 21 10:12:39.281432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:12:39.278975 ignition[787]: Stage: kargs
Apr 21 10:12:39.283894 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:12:39.279094 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:12:39.279100 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:12:39.279742 ignition[787]: kargs: kargs passed
Apr 21 10:12:39.279772 ignition[787]: Ignition finished successfully
Apr 21 10:12:39.297172 ignition[796]: Ignition 2.19.0
Apr 21 10:12:39.297188 ignition[796]: Stage: disks
Apr 21 10:12:39.297302 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:12:39.298821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:12:39.297308 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:12:39.299286 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:12:39.297935 ignition[796]: disks: disks passed
Apr 21 10:12:39.301676 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:12:39.297964 ignition[796]: Ignition finished successfully
Apr 21 10:12:39.306176 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:12:39.307650 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:12:39.310449 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:12:39.321504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:12:39.333976 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:12:39.338326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:12:39.340805 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:12:39.420261 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:12:39.424739 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:12:39.420768 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:12:39.436715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:12:39.439579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:12:39.446965 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Apr 21 10:12:39.446985 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:12:39.446994 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:12:39.447002 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:12:39.441514 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:12:39.441544 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:12:39.457535 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:12:39.441560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:12:39.448675 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:12:39.456478 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:12:39.459999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:12:39.487287 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:12:39.491279 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:12:39.495621 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:12:39.499683 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:12:39.565906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:12:39.580534 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:12:39.584101 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:12:39.588629 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:12:39.603329 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:12:39.607264 ignition[928]: INFO : Ignition 2.19.0
Apr 21 10:12:39.607264 ignition[928]: INFO : Stage: mount
Apr 21 10:12:39.609544 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:12:39.609544 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:12:39.609544 ignition[928]: INFO : mount: mount passed
Apr 21 10:12:39.609544 ignition[928]: INFO : Ignition finished successfully
Apr 21 10:12:39.611249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:12:39.622496 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:12:40.004734 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:12:40.013588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:12:40.022474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Apr 21 10:12:40.022564 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:12:40.022574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:12:40.023625 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:12:40.028401 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:12:40.028942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:12:40.048594 ignition[959]: INFO : Ignition 2.19.0
Apr 21 10:12:40.048594 ignition[959]: INFO : Stage: files
Apr 21 10:12:40.051284 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:12:40.051284 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:12:40.051284 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:12:40.051284 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:12:40.051284 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:12:40.061201 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:12:40.061201 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:12:40.061201 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:12:40.061201 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:12:40.061201 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:12:40.052593 unknown[959]: wrote ssh authorized keys file for user: core
Apr 21 10:12:40.133346 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.21
Apr 21 10:12:40.133442 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Apr 21 10:12:40.155100 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:12:40.571680 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 21 10:12:40.910785 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:12:40.910785 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:12:40.910785 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 21 10:12:41.240952 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:12:41.488227 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:12:41.488227 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:12:41.488227 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:12:41.495993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:12:41.738181 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:12:42.065656 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:12:42.065656 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 21 10:12:42.070952 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:12:42.090943 ignition[959]: INFO : files: files passed
Apr 21 10:12:42.090943 ignition[959]: INFO : Ignition finished successfully
Apr 21 10:12:42.090765 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:12:42.103616 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:12:42.106519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:12:42.109245 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:12:42.128709 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:12:42.109308 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:12:42.134161 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:12:42.134161 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:12:42.117084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:12:42.140248 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:12:42.119597 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:12:42.122620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:12:42.154482 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:12:42.154612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:12:42.156284 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:12:42.161089 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:12:42.164141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:12:42.184583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:12:42.198430 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:12:42.199542 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:12:42.213888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:12:42.214073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:12:42.219339 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:12:42.220996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:12:42.221135 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:12:42.228068 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:12:42.228221 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:12:42.231021 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:12:42.233501 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:12:42.236765 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:12:42.239902 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:12:42.242899 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:12:42.245679 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:12:42.249091 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:12:42.251832 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:12:42.254532 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:12:42.254663 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:12:42.260973 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:12:42.261094 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 21 10:12:42.264058 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:12:42.264204 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:12:42.267266 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:12:42.267411 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 10:12:42.274901 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:12:42.274992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:12:42.276466 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:12:42.279416 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 10:12:42.283479 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:12:42.289154 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:12:42.289340 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:12:42.294701 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 10:12:42.294821 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:12:42.299124 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:12:42.299236 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:12:42.300641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:12:42.300779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:12:42.307203 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:12:42.307310 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:12:42.325748 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:12:42.328987 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Apr 21 10:12:42.329156 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:12:42.336745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:12:42.339119 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:12:42.339395 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:12:42.343321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:12:42.353609 ignition[1013]: INFO : Ignition 2.19.0 Apr 21 10:12:42.353609 ignition[1013]: INFO : Stage: umount Apr 21 10:12:42.353609 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:12:42.353609 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:12:42.353609 ignition[1013]: INFO : umount: umount passed Apr 21 10:12:42.353609 ignition[1013]: INFO : Ignition finished successfully Apr 21 10:12:42.343443 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:12:42.348626 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 10:12:42.348694 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:12:42.355575 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:12:42.367400 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:12:42.368265 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:12:42.371699 systemd[1]: Stopped target network.target - Network. Apr 21 10:12:42.374040 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:12:42.374093 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:12:42.379707 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:12:42.379754 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:12:42.381569 systemd[1]: ignition-setup.service: Deactivated successfully. 
Apr 21 10:12:42.381605 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:12:42.386600 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 10:12:42.386651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 10:12:42.389766 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 10:12:42.392796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 10:12:42.398464 systemd-networkd[784]: eth0: DHCPv6 lease lost Apr 21 10:12:42.401112 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 10:12:42.401232 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 10:12:42.405732 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 10:12:42.405768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:12:42.421514 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 10:12:42.425725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 10:12:42.425799 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:12:42.433904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:12:42.436801 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 10:12:42.436926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 10:12:42.442287 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 10:12:42.442431 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 10:12:42.447750 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 10:12:42.447891 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:12:42.450011 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 21 10:12:42.450054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 10:12:42.452563 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 10:12:42.452590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:12:42.458112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 10:12:42.458149 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:12:42.462302 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 10:12:42.462342 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 10:12:42.466408 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:12:42.466443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:12:42.470627 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 10:12:42.470661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 10:12:42.481520 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 10:12:42.483084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:12:42.483124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:12:42.484762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 10:12:42.484795 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 10:12:42.487655 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 10:12:42.487684 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:12:42.490229 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 10:12:42.490259 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:12:42.493231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 21 10:12:42.493261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:12:42.496590 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 10:12:42.496673 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 10:12:42.499449 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 10:12:42.499586 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 10:12:42.505765 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 10:12:42.521533 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 10:12:42.527170 systemd[1]: Switching root. Apr 21 10:12:42.561830 systemd-journald[193]: Journal stopped Apr 21 10:12:43.304392 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 21 10:12:43.304437 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 10:12:43.304449 kernel: SELinux: policy capability open_perms=1 Apr 21 10:12:43.304457 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 10:12:43.304464 kernel: SELinux: policy capability always_check_network=0 Apr 21 10:12:43.304474 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 10:12:43.304482 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 10:12:43.304489 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 10:12:43.304496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 10:12:43.304504 kernel: audit: type=1403 audit(1776766362.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 10:12:43.304514 systemd[1]: Successfully loaded SELinux policy in 40.579ms. Apr 21 10:12:43.304534 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.877ms. 
Apr 21 10:12:43.304544 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:12:43.304553 systemd[1]: Detected virtualization kvm. Apr 21 10:12:43.304561 systemd[1]: Detected architecture x86-64. Apr 21 10:12:43.304568 systemd[1]: Detected first boot. Apr 21 10:12:43.304576 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:12:43.304584 zram_generator::config[1056]: No configuration found. Apr 21 10:12:43.304596 systemd[1]: Populated /etc with preset unit settings. Apr 21 10:12:43.304607 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 10:12:43.304615 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 21 10:12:43.304623 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 10:12:43.304631 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 10:12:43.304639 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 10:12:43.304647 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 10:12:43.304654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 10:12:43.304662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 10:12:43.304669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 10:12:43.304679 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 10:12:43.304686 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 21 10:12:43.304694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:12:43.304702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:12:43.304709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 10:12:43.304718 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 10:12:43.304725 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 10:12:43.304733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:12:43.304742 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 10:12:43.304750 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:12:43.304757 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 21 10:12:43.304765 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 10:12:43.304773 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 10:12:43.304780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 10:12:43.304788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:12:43.304796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:12:43.304805 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:12:43.304813 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:12:43.304820 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 10:12:43.304827 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 10:12:43.304835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 21 10:12:43.304843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:12:43.304867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:12:43.304875 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 10:12:43.304883 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 10:12:43.304893 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 10:12:43.304901 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 10:12:43.304909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:43.304916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 10:12:43.304924 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 10:12:43.304932 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 21 10:12:43.304940 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:12:43.304948 systemd[1]: Reached target machines.target - Containers. Apr 21 10:12:43.304956 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 10:12:43.304966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:12:43.304973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:12:43.304981 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 10:12:43.304989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:43.304997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 21 10:12:43.305005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:43.305012 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 21 10:12:43.305020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:12:43.305029 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 10:12:43.305039 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 10:12:43.305047 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 10:12:43.305054 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 10:12:43.305061 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 10:12:43.305069 kernel: fuse: init (API version 7.39) Apr 21 10:12:43.305077 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:12:43.305084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:12:43.305092 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 10:12:43.305101 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 10:12:43.305108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:12:43.305116 kernel: loop: module loaded Apr 21 10:12:43.305123 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 10:12:43.305131 systemd[1]: Stopped verity-setup.service. Apr 21 10:12:43.305149 systemd-journald[1137]: Collecting audit messages is disabled. Apr 21 10:12:43.305165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 21 10:12:43.305173 systemd-journald[1137]: Journal started Apr 21 10:12:43.305190 systemd-journald[1137]: Runtime Journal (/run/log/journal/a49a1b22a195436cb8258098562622ca) is 6.0M, max 48.4M, 42.3M free. Apr 21 10:12:43.042514 systemd[1]: Queued start job for default target multi-user.target. Apr 21 10:12:43.059467 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 21 10:12:43.059889 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 10:12:43.310375 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:12:43.312188 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 10:12:43.314631 kernel: ACPI: bus type drm_connector registered Apr 21 10:12:43.314769 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 10:12:43.316683 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 10:12:43.318333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 10:12:43.320111 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 21 10:12:43.321750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 10:12:43.323332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 10:12:43.325265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:12:43.327215 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 10:12:43.327340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 10:12:43.329223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:43.329345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:12:43.331150 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:12:43.331253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 21 10:12:43.332938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:43.333049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:12:43.335001 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 10:12:43.335128 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 10:12:43.336892 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:12:43.337010 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:12:43.338729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:12:43.340525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 10:12:43.342505 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 10:12:43.351514 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 10:12:43.367562 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 21 10:12:43.370564 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 10:12:43.372292 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 10:12:43.372327 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:12:43.374604 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:12:43.377283 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 10:12:43.379783 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 10:12:43.381536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 21 10:12:43.382298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 10:12:43.384909 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 10:12:43.386697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:12:43.387423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 10:12:43.389162 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:12:43.391529 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:12:43.398435 systemd-journald[1137]: Time spent on flushing to /var/log/journal/a49a1b22a195436cb8258098562622ca is 11.956ms for 953 entries. Apr 21 10:12:43.398435 systemd-journald[1137]: System Journal (/var/log/journal/a49a1b22a195436cb8258098562622ca) is 8.0M, max 195.6M, 187.6M free. Apr 21 10:12:43.424110 systemd-journald[1137]: Received client request to flush runtime journal. Apr 21 10:12:43.424143 kernel: loop0: detected capacity change from 0 to 140768 Apr 21 10:12:43.394506 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:12:43.399673 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 10:12:43.402786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:12:43.405576 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 10:12:43.407598 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 21 10:12:43.410513 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 10:12:43.414754 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 21 10:12:43.424731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:12:43.427256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 21 10:12:43.431014 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 10:12:43.436392 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:12:43.442550 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:12:43.445408 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 21 10:12:43.448436 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 21 10:12:43.452171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:12:43.457818 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:12:43.458252 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:12:43.460390 kernel: loop1: detected capacity change from 0 to 142488 Apr 21 10:12:43.464758 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 21 10:12:43.475610 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 21 10:12:43.475622 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 21 10:12:43.480186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 21 10:12:43.502389 kernel: loop2: detected capacity change from 0 to 228704 Apr 21 10:12:43.533400 kernel: loop3: detected capacity change from 0 to 140768 Apr 21 10:12:43.544379 kernel: loop4: detected capacity change from 0 to 142488 Apr 21 10:12:43.555503 kernel: loop5: detected capacity change from 0 to 228704 Apr 21 10:12:43.562880 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 10:12:43.564177 (sd-merge)[1195]: Merged extensions into '/usr'. Apr 21 10:12:43.566744 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:12:43.566766 systemd[1]: Reloading... Apr 21 10:12:43.607385 zram_generator::config[1217]: No configuration found. Apr 21 10:12:43.631231 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:12:43.685718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:12:43.714954 systemd[1]: Reloading finished in 147 ms. Apr 21 10:12:43.742392 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:12:43.745165 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:12:43.759569 systemd[1]: Starting ensure-sysext.service... Apr 21 10:12:43.761821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:12:43.766088 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:12:43.766098 systemd[1]: Reloading... Apr 21 10:12:43.777765 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 21 10:12:43.777991 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:12:43.778546 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:12:43.778725 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 21 10:12:43.778773 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 21 10:12:43.780821 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:12:43.780843 systemd-tmpfiles[1259]: Skipping /boot Apr 21 10:12:43.787739 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:12:43.787748 systemd-tmpfiles[1259]: Skipping /boot Apr 21 10:12:43.799392 zram_generator::config[1282]: No configuration found. Apr 21 10:12:43.869569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:12:43.898469 systemd[1]: Reloading finished in 132 ms. Apr 21 10:12:43.912838 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:12:43.925825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:12:43.933186 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:12:43.936007 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:12:43.938621 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:12:43.942772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:12:43.946535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 21 10:12:43.949704 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:12:43.953108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:43.953213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:12:43.955631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:43.958581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:43.963578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:12:43.965447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:12:43.966763 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:12:43.968531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:43.969240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:12:43.971681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:43.971791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:12:43.973989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:43.974093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:12:43.976581 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Apr 21 10:12:43.976627 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:12:43.976712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 21 10:12:43.983188 augenrules[1350]: No rules Apr 21 10:12:43.984979 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:12:43.990705 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:43.990977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:12:44.001423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:44.004152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:44.008477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:12:44.010082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:12:44.012665 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:12:44.014138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:44.015263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:12:44.017310 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 10:12:44.019759 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:12:44.021905 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:12:44.024026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:44.024205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:12:44.026273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:44.026614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 21 10:12:44.028960 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:12:44.029523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:12:44.031727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:12:44.050903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1365) Apr 21 10:12:44.050786 systemd[1]: Finished ensure-sysext.service. Apr 21 10:12:44.052521 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:12:44.058954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:44.059045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:12:44.066606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:44.069064 systemd-resolved[1328]: Positive Trust Anchors: Apr 21 10:12:44.069088 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:12:44.069113 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:12:44.073567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:12:44.073611 systemd-resolved[1328]: Defaulting to hostname 'linux'. 
Apr 21 10:12:44.080396 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 10:12:44.084549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:44.087490 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:12:44.088546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:12:44.090287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:12:44.092222 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:12:44.093217 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:12:44.093458 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:12:44.093553 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:12:44.098282 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:12:44.101190 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:12:44.101270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:44.101607 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:12:44.103952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:44.104106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:12:44.106213 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:12:44.106490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 21 10:12:44.109468 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 21 10:12:44.110652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:44.110894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:12:44.113068 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:12:44.113264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:12:44.126186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:12:44.132500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:12:44.141529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:12:44.151241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:12:44.151291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:12:44.154542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:12:44.162380 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:12:44.165558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:12:44.176470 systemd-networkd[1406]: lo: Link UP Apr 21 10:12:44.176489 systemd-networkd[1406]: lo: Gained carrier Apr 21 10:12:44.177413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:12:44.177795 systemd-networkd[1406]: Enumeration completed Apr 21 10:12:44.178256 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:12:44.178271 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:12:44.178985 systemd-networkd[1406]: eth0: Link UP Apr 21 10:12:44.178999 systemd-networkd[1406]: eth0: Gained carrier Apr 21 10:12:44.179008 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:12:44.179515 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:12:44.181317 systemd[1]: Reached target network.target - Network. Apr 21 10:12:44.182781 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:12:44.227539 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:12:44.227652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:12:44.228487 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Apr 21 10:12:44.229718 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 10:12:44.229777 systemd-timesyncd[1407]: Initial clock synchronization to Tue 2026-04-21 10:12:44.569281 UTC. Apr 21 10:12:44.288296 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:12:44.327044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:12:44.341539 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:12:44.347637 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:12:44.392894 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:12:44.394953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:12:44.396570 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 21 10:12:44.398240 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:12:44.399992 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:12:44.401899 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:12:44.403504 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:12:44.405278 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:12:44.407032 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:12:44.407054 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:12:44.408338 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:12:44.410320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:12:44.412928 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:12:44.420093 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:12:44.422613 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:12:44.424609 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:12:44.426185 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:12:44.427561 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:12:44.428938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:12:44.428956 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:12:44.429691 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:12:44.431393 lvm[1433]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Apr 21 10:12:44.431907 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:12:44.436460 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:12:44.440169 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:12:44.441642 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:12:44.443513 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:12:44.445074 jq[1436]: false Apr 21 10:12:44.446302 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 10:12:44.448945 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 10:12:44.451237 dbus-daemon[1435]: [system] SELinux support is enabled Apr 21 10:12:44.451319 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:12:44.454641 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 10:12:44.456484 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 10:12:44.456747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 21 10:12:44.460393 extend-filesystems[1437]: Found loop3 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found loop4 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found loop5 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found sr0 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda1 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda2 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda3 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found usr Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda4 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda6 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda7 Apr 21 10:12:44.460393 extend-filesystems[1437]: Found vda9 Apr 21 10:12:44.460393 extend-filesystems[1437]: Checking size of /dev/vda9 Apr 21 10:12:44.459505 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 10:12:44.477757 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:12:44.481549 jq[1448]: true Apr 21 10:12:44.479799 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 10:12:44.483565 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:12:44.488665 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:12:44.489968 update_engine[1444]: I20260421 10:12:44.489531 1444 main.cc:92] Flatcar Update Engine starting Apr 21 10:12:44.489407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:12:44.489594 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 10:12:44.489695 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 21 10:12:44.490836 update_engine[1444]: I20260421 10:12:44.490808 1444 update_check_scheduler.cc:74] Next update check in 7m2s Apr 21 10:12:44.491716 extend-filesystems[1437]: Resized partition /dev/vda9 Apr 21 10:12:44.494446 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:12:44.502460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1375) Apr 21 10:12:44.493276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 10:12:44.493440 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 10:12:44.505388 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 10:12:44.508911 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:12:44.514663 jq[1462]: true Apr 21 10:12:44.514738 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Apr 21 10:12:44.514749 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:12:44.516891 systemd-logind[1443]: New seat seat0. Apr 21 10:12:44.522551 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 10:12:44.526431 tar[1460]: linux-amd64/LICENSE Apr 21 10:12:44.526431 tar[1460]: linux-amd64/helm Apr 21 10:12:44.532163 systemd[1]: Started update-engine.service - Update Engine. Apr 21 10:12:44.534921 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 10:12:44.535040 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 21 10:12:44.537007 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 10:12:44.537253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 10:12:44.557486 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 10:12:44.557486 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 10:12:44.557486 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 10:12:44.537329 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 10:12:44.563461 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Apr 21 10:12:44.547588 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 10:12:44.563091 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:12:44.563223 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:12:44.573174 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:12:44.575522 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:12:44.576419 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:12:44.578441 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 10:12:44.668210 containerd[1463]: time="2026-04-21T10:12:44.668069099Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:12:44.687194 containerd[1463]: time="2026-04-21T10:12:44.687145333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:12:44.688543 containerd[1463]: time="2026-04-21T10:12:44.688490957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:12:44.688543 containerd[1463]: time="2026-04-21T10:12:44.688528626Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:12:44.688543 containerd[1463]: time="2026-04-21T10:12:44.688540015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:12:44.689095 containerd[1463]: time="2026-04-21T10:12:44.689057163Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 21 10:12:44.689115 containerd[1463]: time="2026-04-21T10:12:44.689095982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689160 containerd[1463]: time="2026-04-21T10:12:44.689139040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689176 containerd[1463]: time="2026-04-21T10:12:44.689162227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689305 containerd[1463]: time="2026-04-21T10:12:44.689274588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689305 containerd[1463]: time="2026-04-21T10:12:44.689299727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689337 containerd[1463]: time="2026-04-21T10:12:44.689308945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689337 containerd[1463]: time="2026-04-21T10:12:44.689315690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689410 containerd[1463]: time="2026-04-21T10:12:44.689393760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689605 containerd[1463]: time="2026-04-21T10:12:44.689570168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689683 containerd[1463]: time="2026-04-21T10:12:44.689664493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:12:44.689703 containerd[1463]: time="2026-04-21T10:12:44.689685270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 10:12:44.689763 containerd[1463]: time="2026-04-21T10:12:44.689745577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 21 10:12:44.689805 containerd[1463]: time="2026-04-21T10:12:44.689790123Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:12:44.694899 containerd[1463]: time="2026-04-21T10:12:44.694853192Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:12:44.694930 containerd[1463]: time="2026-04-21T10:12:44.694906798Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 10:12:44.694930 containerd[1463]: time="2026-04-21T10:12:44.694919416Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:12:44.694960 containerd[1463]: time="2026-04-21T10:12:44.694930416Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:12:44.694960 containerd[1463]: time="2026-04-21T10:12:44.694941170Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:12:44.695054 containerd[1463]: time="2026-04-21T10:12:44.695024043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:12:44.695222 containerd[1463]: time="2026-04-21T10:12:44.695204201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:12:44.695327 containerd[1463]: time="2026-04-21T10:12:44.695298220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:12:44.695327 containerd[1463]: time="2026-04-21T10:12:44.695323681Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:12:44.695386 containerd[1463]: time="2026-04-21T10:12:44.695337357Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 21 10:12:44.695386 containerd[1463]: time="2026-04-21T10:12:44.695351419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695427 containerd[1463]: time="2026-04-21T10:12:44.695390358Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695427 containerd[1463]: time="2026-04-21T10:12:44.695399450Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695427 containerd[1463]: time="2026-04-21T10:12:44.695408815Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695427 containerd[1463]: time="2026-04-21T10:12:44.695418886Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695427557Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695436951Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695444835Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695458273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695466954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 21 10:12:44.695481 containerd[1463]: time="2026-04-21T10:12:44.695475224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695483321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695491853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695500386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695508400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695517090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695526603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695541810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695550560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695558431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695572 containerd[1463]: time="2026-04-21T10:12:44.695566669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695579938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695594613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695602428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695616944Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695653620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695667031Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695674693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695682477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695688630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695699947Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 21 10:12:44.695713 containerd[1463]: time="2026-04-21T10:12:44.695710911Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:12:44.695882 containerd[1463]: time="2026-04-21T10:12:44.695721098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 21 10:12:44.696023 containerd[1463]: time="2026-04-21T10:12:44.695971460Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:12:44.696023 containerd[1463]: time="2026-04-21T10:12:44.696022560Z" level=info msg="Connect containerd service" Apr 21 10:12:44.696470 containerd[1463]: time="2026-04-21T10:12:44.696213872Z" level=info msg="using legacy CRI server" Apr 21 10:12:44.696470 containerd[1463]: time="2026-04-21T10:12:44.696262854Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:12:44.697263 containerd[1463]: time="2026-04-21T10:12:44.697221226Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:12:44.698064 containerd[1463]: time="2026-04-21T10:12:44.698033843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:12:44.698408 containerd[1463]: time="2026-04-21T10:12:44.698334062Z" level=info msg="Start subscribing containerd event" Apr 21 
10:12:44.698408 containerd[1463]: time="2026-04-21T10:12:44.698402798Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:12:44.698463 containerd[1463]: time="2026-04-21T10:12:44.698423990Z" level=info msg="Start recovering state" Apr 21 10:12:44.698463 containerd[1463]: time="2026-04-21T10:12:44.698441779Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:12:44.698528 containerd[1463]: time="2026-04-21T10:12:44.698498872Z" level=info msg="Start event monitor" Apr 21 10:12:44.698544 containerd[1463]: time="2026-04-21T10:12:44.698528893Z" level=info msg="Start snapshots syncer" Apr 21 10:12:44.698544 containerd[1463]: time="2026-04-21T10:12:44.698537920Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:12:44.698574 containerd[1463]: time="2026-04-21T10:12:44.698548539Z" level=info msg="Start streaming server" Apr 21 10:12:44.698688 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:12:44.699552 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:12:44.700439 containerd[1463]: time="2026-04-21T10:12:44.700401210Z" level=info msg="containerd successfully booted in 0.033076s" Apr 21 10:12:44.720284 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:12:44.729648 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:12:44.736395 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:12:44.736568 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:12:44.739262 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:12:44.749764 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:12:44.757733 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:12:44.760413 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Apr 21 10:12:44.762152 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:12:44.925647 tar[1460]: linux-amd64/README.md Apr 21 10:12:44.938796 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:12:45.507864 systemd-networkd[1406]: eth0: Gained IPv6LL Apr 21 10:12:45.510278 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:12:45.512897 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:12:45.526630 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:12:45.529786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:12:45.532157 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:12:45.545822 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:12:45.545978 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 10:12:45.548059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:12:45.550635 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:12:46.174333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:12:46.176577 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:12:46.178362 systemd[1]: Startup finished in 796ms (kernel) + 6.039s (initrd) + 3.502s (userspace) = 10.338s. 
Apr 21 10:12:46.178780 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:12:46.582014 kubelet[1548]: E0421 10:12:46.581885 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:12:46.584239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:12:46.584363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:12:50.411696 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:12:50.412643 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:45194.service - OpenSSH per-connection server daemon (10.0.0.1:45194). Apr 21 10:12:50.456635 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 45194 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:50.458134 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:50.465202 systemd-logind[1443]: New session 1 of user core. Apr 21 10:12:50.465982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:12:50.477807 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:12:50.486205 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:12:50.487968 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:12:50.494004 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:12:50.560977 systemd[1565]: Queued start job for default target default.target. 
Apr 21 10:12:50.570362 systemd[1565]: Created slice app.slice - User Application Slice. Apr 21 10:12:50.570448 systemd[1565]: Reached target paths.target - Paths. Apr 21 10:12:50.570459 systemd[1565]: Reached target timers.target - Timers. Apr 21 10:12:50.571611 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:12:50.581990 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:12:50.582067 systemd[1565]: Reached target sockets.target - Sockets. Apr 21 10:12:50.582079 systemd[1565]: Reached target basic.target - Basic System. Apr 21 10:12:50.582137 systemd[1565]: Reached target default.target - Main User Target. Apr 21 10:12:50.582163 systemd[1565]: Startup finished in 83ms. Apr 21 10:12:50.582536 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:12:50.583958 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:12:50.642556 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:45204.service - OpenSSH per-connection server daemon (10.0.0.1:45204). Apr 21 10:12:50.675070 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 45204 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:50.676074 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:50.679631 systemd-logind[1443]: New session 2 of user core. Apr 21 10:12:50.685936 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:12:50.739845 sshd[1576]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:50.748903 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:45204.service: Deactivated successfully. Apr 21 10:12:50.750028 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:12:50.751076 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:12:50.752029 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:45220.service - OpenSSH per-connection server daemon (10.0.0.1:45220). 
Apr 21 10:12:50.752616 systemd-logind[1443]: Removed session 2. Apr 21 10:12:50.784065 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 45220 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:50.785181 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:50.788575 systemd-logind[1443]: New session 3 of user core. Apr 21 10:12:50.803590 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:12:50.851946 sshd[1583]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:50.860441 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:45220.service: Deactivated successfully. Apr 21 10:12:50.861529 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:12:50.862467 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:12:50.863574 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:45234.service - OpenSSH per-connection server daemon (10.0.0.1:45234). Apr 21 10:12:50.864108 systemd-logind[1443]: Removed session 3. Apr 21 10:12:50.896134 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 45234 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:50.897303 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:50.901095 systemd-logind[1443]: New session 4 of user core. Apr 21 10:12:50.910537 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:12:50.963473 sshd[1590]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:50.976472 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:45234.service: Deactivated successfully. Apr 21 10:12:50.977662 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:12:50.978697 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:12:50.992628 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:45244.service - OpenSSH per-connection server daemon (10.0.0.1:45244). 
Apr 21 10:12:50.993458 systemd-logind[1443]: Removed session 4. Apr 21 10:12:51.021567 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:51.022576 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:51.025848 systemd-logind[1443]: New session 5 of user core. Apr 21 10:12:51.039523 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:12:51.117421 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:12:51.117627 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:12:51.136480 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 21 10:12:51.138985 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:51.151566 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:45244.service: Deactivated successfully. Apr 21 10:12:51.152705 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:12:51.153727 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:12:51.160704 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:45250.service - OpenSSH per-connection server daemon (10.0.0.1:45250). Apr 21 10:12:51.161350 systemd-logind[1443]: Removed session 5. Apr 21 10:12:51.190637 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 45250 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:51.192062 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:51.195787 systemd-logind[1443]: New session 6 of user core. Apr 21 10:12:51.211558 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:12:51.263637 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:12:51.263849 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:12:51.267245 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 21 10:12:51.271472 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:12:51.271671 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:12:51.289668 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:12:51.291221 auditctl[1612]: No rules Apr 21 10:12:51.291528 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:12:51.291680 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:12:51.293540 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:12:51.316271 augenrules[1630]: No rules Apr 21 10:12:51.317311 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:12:51.318079 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 21 10:12:51.319407 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:51.338434 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:45250.service: Deactivated successfully. Apr 21 10:12:51.339561 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:12:51.340529 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:12:51.346707 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:45256.service - OpenSSH per-connection server daemon (10.0.0.1:45256). Apr 21 10:12:51.347494 systemd-logind[1443]: Removed session 6. 
Apr 21 10:12:51.376249 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 45256 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:12:51.377535 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:51.381012 systemd-logind[1443]: New session 7 of user core. Apr 21 10:12:51.389563 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:12:51.441715 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:12:51.441939 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:12:51.671959 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:12:51.672061 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:12:51.890940 dockerd[1659]: time="2026-04-21T10:12:51.890873789Z" level=info msg="Starting up" Apr 21 10:12:51.975935 dockerd[1659]: time="2026-04-21T10:12:51.975814192Z" level=info msg="Loading containers: start." Apr 21 10:12:52.080420 kernel: Initializing XFRM netlink socket Apr 21 10:12:52.153093 systemd-networkd[1406]: docker0: Link UP Apr 21 10:12:52.177960 dockerd[1659]: time="2026-04-21T10:12:52.177872804Z" level=info msg="Loading containers: done." Apr 21 10:12:52.189565 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2679700232-merged.mount: Deactivated successfully. 
Apr 21 10:12:52.191020 dockerd[1659]: time="2026-04-21T10:12:52.190949351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:12:52.191095 dockerd[1659]: time="2026-04-21T10:12:52.191083876Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:12:52.191200 dockerd[1659]: time="2026-04-21T10:12:52.191169750Z" level=info msg="Daemon has completed initialization" Apr 21 10:12:52.224758 dockerd[1659]: time="2026-04-21T10:12:52.224672531Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:12:52.224987 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:12:52.702565 containerd[1463]: time="2026-04-21T10:12:52.702492185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 10:12:53.250095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824698458.mount: Deactivated successfully. 
Apr 21 10:12:54.175773 containerd[1463]: time="2026-04-21T10:12:54.175701993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:54.176472 containerd[1463]: time="2026-04-21T10:12:54.176418636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 21 10:12:54.177239 containerd[1463]: time="2026-04-21T10:12:54.177213953Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:54.179681 containerd[1463]: time="2026-04-21T10:12:54.179640135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:54.180681 containerd[1463]: time="2026-04-21T10:12:54.180657371Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.478118475s" Apr 21 10:12:54.180741 containerd[1463]: time="2026-04-21T10:12:54.180688240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 10:12:54.181442 containerd[1463]: time="2026-04-21T10:12:54.181412330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 10:12:55.403160 containerd[1463]: time="2026-04-21T10:12:55.403081940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:55.403692 containerd[1463]: time="2026-04-21T10:12:55.403636009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 21 10:12:55.405031 containerd[1463]: time="2026-04-21T10:12:55.405000420Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:55.409084 containerd[1463]: time="2026-04-21T10:12:55.409042472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:55.410032 containerd[1463]: time="2026-04-21T10:12:55.409997902Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.228561268s" Apr 21 10:12:55.410065 containerd[1463]: time="2026-04-21T10:12:55.410040393Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 10:12:55.410901 containerd[1463]: time="2026-04-21T10:12:55.410884451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 10:12:56.538680 containerd[1463]: time="2026-04-21T10:12:56.537537909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.539094 containerd[1463]: 
time="2026-04-21T10:12:56.539028753Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 21 10:12:56.540798 containerd[1463]: time="2026-04-21T10:12:56.540008405Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.544964 containerd[1463]: time="2026-04-21T10:12:56.544915967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.548252 containerd[1463]: time="2026-04-21T10:12:56.548211130Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.137304726s" Apr 21 10:12:56.548280 containerd[1463]: time="2026-04-21T10:12:56.548252666Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 10:12:56.548904 containerd[1463]: time="2026-04-21T10:12:56.548872031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 10:12:56.835631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:12:56.848744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:12:56.957613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:12:56.960855 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:12:57.010169 kubelet[1881]: E0421 10:12:57.010136 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:12:57.013602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:12:57.013802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:12:57.384034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595685172.mount: Deactivated successfully. Apr 21 10:12:57.731066 containerd[1463]: time="2026-04-21T10:12:57.730930644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:57.731863 containerd[1463]: time="2026-04-21T10:12:57.731811251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 21 10:12:57.732759 containerd[1463]: time="2026-04-21T10:12:57.732722184Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:57.734893 containerd[1463]: time="2026-04-21T10:12:57.734855723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:57.735423 containerd[1463]: time="2026-04-21T10:12:57.735389224Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.186463225s" Apr 21 10:12:57.735423 containerd[1463]: time="2026-04-21T10:12:57.735424338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 10:12:57.736225 containerd[1463]: time="2026-04-21T10:12:57.736059437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 10:12:58.137127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900069591.mount: Deactivated successfully. Apr 21 10:12:58.742788 containerd[1463]: time="2026-04-21T10:12:58.742724810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:58.743592 containerd[1463]: time="2026-04-21T10:12:58.743556428Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 21 10:12:58.744578 containerd[1463]: time="2026-04-21T10:12:58.744548837Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:58.746826 containerd[1463]: time="2026-04-21T10:12:58.746789062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:58.747850 containerd[1463]: time="2026-04-21T10:12:58.747820747Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.011738615s" Apr 21 10:12:58.747877 containerd[1463]: time="2026-04-21T10:12:58.747848804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 10:12:58.748573 containerd[1463]: time="2026-04-21T10:12:58.748543356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 10:12:59.170333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505667988.mount: Deactivated successfully. Apr 21 10:12:59.176466 containerd[1463]: time="2026-04-21T10:12:59.176422492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:59.177123 containerd[1463]: time="2026-04-21T10:12:59.177087237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 10:12:59.178312 containerd[1463]: time="2026-04-21T10:12:59.178268589Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:59.180121 containerd[1463]: time="2026-04-21T10:12:59.180081352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:59.180706 containerd[1463]: time="2026-04-21T10:12:59.180671452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 432.078554ms" Apr 21 10:12:59.180706 containerd[1463]: time="2026-04-21T10:12:59.180703452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 10:12:59.181294 containerd[1463]: time="2026-04-21T10:12:59.181277916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 10:12:59.638168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426167091.mount: Deactivated successfully. Apr 21 10:13:00.222680 containerd[1463]: time="2026-04-21T10:13:00.222625659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:00.223459 containerd[1463]: time="2026-04-21T10:13:00.223403209Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 21 10:13:00.224460 containerd[1463]: time="2026-04-21T10:13:00.224423550Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:00.226794 containerd[1463]: time="2026-04-21T10:13:00.226752663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:00.227638 containerd[1463]: time="2026-04-21T10:13:00.227595916Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.046296389s" Apr 21 
10:13:00.227674 containerd[1463]: time="2026-04-21T10:13:00.227643086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 10:13:02.423991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:13:02.438681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:13:02.460506 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)... Apr 21 10:13:02.460532 systemd[1]: Reloading... Apr 21 10:13:02.519440 zram_generator::config[2087]: No configuration found. Apr 21 10:13:02.595585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:13:02.642162 systemd[1]: Reloading finished in 181 ms. Apr 21 10:13:02.687039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:13:02.689260 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:13:02.689452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:13:02.690581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:13:02.783119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:13:02.786593 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:13:02.818384 kubelet[2137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:13:02.818384 kubelet[2137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 21 10:13:02.818384 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:13:02.818773 kubelet[2137]: I0421 10:13:02.818441 2137 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:13:03.081224 kubelet[2137]: I0421 10:13:03.081106 2137 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:13:03.081224 kubelet[2137]: I0421 10:13:03.081140 2137 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:13:03.081425 kubelet[2137]: I0421 10:13:03.081392 2137 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:13:03.102907 kubelet[2137]: E0421 10:13:03.101851 2137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:13:03.105303 kubelet[2137]: I0421 10:13:03.105275 2137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:13:03.111710 kubelet[2137]: E0421 10:13:03.111666 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:13:03.111710 kubelet[2137]: I0421 10:13:03.111704 2137 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 21 10:13:03.115238 kubelet[2137]: I0421 10:13:03.115215 2137 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:13:03.115787 kubelet[2137]: I0421 10:13:03.115736 2137 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:13:03.115942 kubelet[2137]: I0421 10:13:03.115775 2137 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:13:03.115942 kubelet[2137]: I0421 10:13:03.115933 2137 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:13:03.115942 kubelet[2137]: I0421 10:13:03.115941 2137 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:13:03.116057 kubelet[2137]: I0421 10:13:03.116036 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:13:03.119117 kubelet[2137]: I0421 10:13:03.119072 2137 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:13:03.119117 kubelet[2137]: I0421 10:13:03.119098 2137 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:13:03.119117 kubelet[2137]: I0421 10:13:03.119118 2137 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:13:03.120506 kubelet[2137]: I0421 10:13:03.120481 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:13:03.122929 kubelet[2137]: I0421 10:13:03.122631 2137 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:13:03.123125 kubelet[2137]: I0421 10:13:03.123088 2137 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:13:03.124530 kubelet[2137]: W0421 10:13:03.123897 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:13:03.126752 kubelet[2137]: E0421 10:13:03.126697 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:13:03.127374 kubelet[2137]: E0421 10:13:03.126887 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:13:03.127903 kubelet[2137]: I0421 10:13:03.127885 2137 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:13:03.127946 kubelet[2137]: I0421 10:13:03.127932 2137 server.go:1289] "Started kubelet" Apr 21 10:13:03.128229 kubelet[2137]: I0421 10:13:03.128184 2137 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:13:03.133613 kubelet[2137]: I0421 10:13:03.133570 2137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:13:03.135058 kubelet[2137]: I0421 10:13:03.135036 2137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:13:03.135177 kubelet[2137]: E0421 10:13:03.133943 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8579c9b31aaf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:13:03.127898869 +0000 UTC m=+0.338202744,LastTimestamp:2026-04-21 10:13:03.127898869 +0000 UTC m=+0.338202744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:13:03.135519 kubelet[2137]: I0421 10:13:03.135501 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:13:03.137614 kubelet[2137]: E0421 10:13:03.137576 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:03.139395 kubelet[2137]: I0421 10:13:03.137730 2137 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:13:03.139395 kubelet[2137]: I0421 10:13:03.137829 2137 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:13:03.139395 kubelet[2137]: I0421 10:13:03.137895 2137 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:13:03.139395 kubelet[2137]: E0421 10:13:03.138560 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Apr 21 10:13:03.139395 kubelet[2137]: E0421 10:13:03.138790 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:13:03.139395 kubelet[2137]: I0421 10:13:03.139124 2137 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:13:03.139395 kubelet[2137]: I0421 10:13:03.139309 2137 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:13:03.140033 kubelet[2137]: I0421 10:13:03.139978 2137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:13:03.141469 kubelet[2137]: I0421 10:13:03.141456 2137 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:13:03.141592 kubelet[2137]: I0421 10:13:03.141577 2137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:13:03.141967 kubelet[2137]: E0421 10:13:03.141940 2137 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:13:03.143424 kubelet[2137]: I0421 10:13:03.143392 2137 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:13:03.153101 kubelet[2137]: I0421 10:13:03.153056 2137 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:13:03.153160 kubelet[2137]: I0421 10:13:03.153104 2137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:13:03.153160 kubelet[2137]: I0421 10:13:03.153118 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:13:03.154845 kubelet[2137]: I0421 10:13:03.154812 2137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:13:03.154945 kubelet[2137]: I0421 10:13:03.154852 2137 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:13:03.154945 kubelet[2137]: I0421 10:13:03.154872 2137 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:13:03.154945 kubelet[2137]: I0421 10:13:03.154881 2137 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:13:03.154945 kubelet[2137]: E0421 10:13:03.154910 2137 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:13:03.210077 kubelet[2137]: I0421 10:13:03.210004 2137 policy_none.go:49] "None policy: Start" Apr 21 10:13:03.210077 kubelet[2137]: I0421 10:13:03.210039 2137 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:13:03.210077 kubelet[2137]: I0421 10:13:03.210050 2137 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:13:03.210461 kubelet[2137]: E0421 10:13:03.210420 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:13:03.215428 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:13:03.233721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 21 10:13:03.238764 kubelet[2137]: E0421 10:13:03.238685 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:03.243731 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 21 10:13:03.245448 kubelet[2137]: E0421 10:13:03.245416 2137 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:13:03.245603 kubelet[2137]: I0421 10:13:03.245589 2137 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:13:03.245733 kubelet[2137]: I0421 10:13:03.245601 2137 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:13:03.245989 kubelet[2137]: I0421 10:13:03.245866 2137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:13:03.247276 kubelet[2137]: E0421 10:13:03.247256 2137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:13:03.247386 kubelet[2137]: E0421 10:13:03.247290 2137 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:13:03.265863 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. Apr 21 10:13:03.274070 kubelet[2137]: E0421 10:13:03.274001 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:03.277031 systemd[1]: Created slice kubepods-burstable-pod47e50b39a5c3f70a3898f39186f718d0.slice - libcontainer container kubepods-burstable-pod47e50b39a5c3f70a3898f39186f718d0.slice. 
Apr 21 10:13:03.278159 kubelet[2137]: E0421 10:13:03.278140 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:03.279726 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 21 10:13:03.281111 kubelet[2137]: E0421 10:13:03.281073 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:03.339788 kubelet[2137]: I0421 10:13:03.339512 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:03.339788 kubelet[2137]: I0421 10:13:03.339558 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:03.339788 kubelet[2137]: E0421 10:13:03.339732 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Apr 21 10:13:03.339788 kubelet[2137]: I0421 10:13:03.339754 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:03.339788 kubelet[2137]: I0421 10:13:03.339791 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:03.340011 kubelet[2137]: I0421 10:13:03.339808 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:03.340011 kubelet[2137]: I0421 10:13:03.339829 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:03.340011 kubelet[2137]: I0421 10:13:03.339841 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:03.340011 kubelet[2137]: I0421 10:13:03.339855 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:03.340011 kubelet[2137]: I0421 10:13:03.339867 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:03.348035 kubelet[2137]: I0421 10:13:03.347970 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:13:03.348305 kubelet[2137]: E0421 10:13:03.348251 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 21 10:13:03.552005 kubelet[2137]: I0421 10:13:03.551930 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:13:03.552403 kubelet[2137]: E0421 10:13:03.552347 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 21 10:13:03.575465 kubelet[2137]: E0421 10:13:03.575336 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:03.576965 containerd[1463]: time="2026-04-21T10:13:03.576863486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 21 10:13:03.579175 kubelet[2137]: E0421 10:13:03.579145 2137 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:03.579785 containerd[1463]: time="2026-04-21T10:13:03.579754409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47e50b39a5c3f70a3898f39186f718d0,Namespace:kube-system,Attempt:0,}" Apr 21 10:13:03.582010 kubelet[2137]: E0421 10:13:03.581975 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:03.582338 containerd[1463]: time="2026-04-21T10:13:03.582309184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 21 10:13:03.741091 kubelet[2137]: E0421 10:13:03.740773 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Apr 21 10:13:03.954386 kubelet[2137]: I0421 10:13:03.954317 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:13:03.954695 kubelet[2137]: E0421 10:13:03.954670 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 21 10:13:03.974168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233651785.mount: Deactivated successfully. 
Apr 21 10:13:03.980874 containerd[1463]: time="2026-04-21T10:13:03.980818478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:13:03.981531 containerd[1463]: time="2026-04-21T10:13:03.981490852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:13:03.984323 containerd[1463]: time="2026-04-21T10:13:03.984284891Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:13:03.985484 containerd[1463]: time="2026-04-21T10:13:03.985436628Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:13:03.986172 containerd[1463]: time="2026-04-21T10:13:03.986127979Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:13:03.987148 containerd[1463]: time="2026-04-21T10:13:03.987113749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:13:03.987684 containerd[1463]: time="2026-04-21T10:13:03.987667118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:13:03.988433 containerd[1463]: time="2026-04-21T10:13:03.988405118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:13:03.988915 
containerd[1463]: time="2026-04-21T10:13:03.988889359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 411.90038ms" Apr 21 10:13:03.991392 containerd[1463]: time="2026-04-21T10:13:03.991306942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 408.923275ms" Apr 21 10:13:03.995587 containerd[1463]: time="2026-04-21T10:13:03.995426794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.601927ms" Apr 21 10:13:04.042820 kubelet[2137]: E0421 10:13:04.042734 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:13:04.062080 kubelet[2137]: E0421 10:13:04.062053 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 21 10:13:04.127747 containerd[1463]: time="2026-04-21T10:13:04.127620814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:04.127747 containerd[1463]: time="2026-04-21T10:13:04.127689145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:04.127747 containerd[1463]: time="2026-04-21T10:13:04.127701166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.127962 containerd[1463]: time="2026-04-21T10:13:04.127759729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.128197 containerd[1463]: time="2026-04-21T10:13:04.128127886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:04.128197 containerd[1463]: time="2026-04-21T10:13:04.128171347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:04.128197 containerd[1463]: time="2026-04-21T10:13:04.128183562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.128310 containerd[1463]: time="2026-04-21T10:13:04.128227176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.128957 containerd[1463]: time="2026-04-21T10:13:04.128879139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:04.129000 containerd[1463]: time="2026-04-21T10:13:04.128957530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:04.129000 containerd[1463]: time="2026-04-21T10:13:04.128972381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.129127 containerd[1463]: time="2026-04-21T10:13:04.129072773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:04.149729 systemd[1]: Started cri-containerd-62736d555b6b3718ecafdf6d51f32a1971f7318aaee8aae0735fc1d45163a665.scope - libcontainer container 62736d555b6b3718ecafdf6d51f32a1971f7318aaee8aae0735fc1d45163a665. Apr 21 10:13:04.153022 systemd[1]: Started cri-containerd-224685e15e8bf4abbfa3ff5ce2373a7aec8a7b029c39ba5f88eb5c1e0b2577f8.scope - libcontainer container 224685e15e8bf4abbfa3ff5ce2373a7aec8a7b029c39ba5f88eb5c1e0b2577f8. Apr 21 10:13:04.154417 systemd[1]: Started cri-containerd-f43186a33a311c0a8bad1c9c58b0c2dde3b6c7e9588ff7e4bb87dd93c4337e06.scope - libcontainer container f43186a33a311c0a8bad1c9c58b0c2dde3b6c7e9588ff7e4bb87dd93c4337e06. 
Apr 21 10:13:04.188291 containerd[1463]: time="2026-04-21T10:13:04.188203829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"62736d555b6b3718ecafdf6d51f32a1971f7318aaee8aae0735fc1d45163a665\"" Apr 21 10:13:04.190717 kubelet[2137]: E0421 10:13:04.190675 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:04.196614 containerd[1463]: time="2026-04-21T10:13:04.196535798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"224685e15e8bf4abbfa3ff5ce2373a7aec8a7b029c39ba5f88eb5c1e0b2577f8\"" Apr 21 10:13:04.197130 kubelet[2137]: E0421 10:13:04.197090 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:04.199545 containerd[1463]: time="2026-04-21T10:13:04.199467492Z" level=info msg="CreateContainer within sandbox \"62736d555b6b3718ecafdf6d51f32a1971f7318aaee8aae0735fc1d45163a665\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:13:04.200928 containerd[1463]: time="2026-04-21T10:13:04.200893214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47e50b39a5c3f70a3898f39186f718d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f43186a33a311c0a8bad1c9c58b0c2dde3b6c7e9588ff7e4bb87dd93c4337e06\"" Apr 21 10:13:04.201411 kubelet[2137]: E0421 10:13:04.201345 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:04.202111 containerd[1463]: 
time="2026-04-21T10:13:04.202073034Z" level=info msg="CreateContainer within sandbox \"224685e15e8bf4abbfa3ff5ce2373a7aec8a7b029c39ba5f88eb5c1e0b2577f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:13:04.205312 containerd[1463]: time="2026-04-21T10:13:04.205282611Z" level=info msg="CreateContainer within sandbox \"f43186a33a311c0a8bad1c9c58b0c2dde3b6c7e9588ff7e4bb87dd93c4337e06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:13:04.217994 containerd[1463]: time="2026-04-21T10:13:04.217948887Z" level=info msg="CreateContainer within sandbox \"62736d555b6b3718ecafdf6d51f32a1971f7318aaee8aae0735fc1d45163a665\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"902dadc50db4057516b67aba832dfd8814f793122a439bb791c0bc41172d51fa\"" Apr 21 10:13:04.218631 containerd[1463]: time="2026-04-21T10:13:04.218602549Z" level=info msg="StartContainer for \"902dadc50db4057516b67aba832dfd8814f793122a439bb791c0bc41172d51fa\"" Apr 21 10:13:04.222233 containerd[1463]: time="2026-04-21T10:13:04.222143776Z" level=info msg="CreateContainer within sandbox \"224685e15e8bf4abbfa3ff5ce2373a7aec8a7b029c39ba5f88eb5c1e0b2577f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91b5935baca3be56ff55f3964123c015e43b644ed8c92d342a2c6c812d355459\"" Apr 21 10:13:04.222921 containerd[1463]: time="2026-04-21T10:13:04.222907081Z" level=info msg="StartContainer for \"91b5935baca3be56ff55f3964123c015e43b644ed8c92d342a2c6c812d355459\"" Apr 21 10:13:04.226753 containerd[1463]: time="2026-04-21T10:13:04.226665475Z" level=info msg="CreateContainer within sandbox \"f43186a33a311c0a8bad1c9c58b0c2dde3b6c7e9588ff7e4bb87dd93c4337e06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cd5f056c26cb530915ea7df95b63f16669afb6ee1b491a955b45f226fe855fc\"" Apr 21 10:13:04.227092 containerd[1463]: time="2026-04-21T10:13:04.227067944Z" level=info msg="StartContainer for 
\"8cd5f056c26cb530915ea7df95b63f16669afb6ee1b491a955b45f226fe855fc\"" Apr 21 10:13:04.242514 systemd[1]: Started cri-containerd-902dadc50db4057516b67aba832dfd8814f793122a439bb791c0bc41172d51fa.scope - libcontainer container 902dadc50db4057516b67aba832dfd8814f793122a439bb791c0bc41172d51fa. Apr 21 10:13:04.246571 systemd[1]: Started cri-containerd-8cd5f056c26cb530915ea7df95b63f16669afb6ee1b491a955b45f226fe855fc.scope - libcontainer container 8cd5f056c26cb530915ea7df95b63f16669afb6ee1b491a955b45f226fe855fc. Apr 21 10:13:04.247681 systemd[1]: Started cri-containerd-91b5935baca3be56ff55f3964123c015e43b644ed8c92d342a2c6c812d355459.scope - libcontainer container 91b5935baca3be56ff55f3964123c015e43b644ed8c92d342a2c6c812d355459. Apr 21 10:13:04.287286 containerd[1463]: time="2026-04-21T10:13:04.287215476Z" level=info msg="StartContainer for \"8cd5f056c26cb530915ea7df95b63f16669afb6ee1b491a955b45f226fe855fc\" returns successfully" Apr 21 10:13:04.290977 containerd[1463]: time="2026-04-21T10:13:04.290863163Z" level=info msg="StartContainer for \"902dadc50db4057516b67aba832dfd8814f793122a439bb791c0bc41172d51fa\" returns successfully" Apr 21 10:13:04.294943 containerd[1463]: time="2026-04-21T10:13:04.294899964Z" level=info msg="StartContainer for \"91b5935baca3be56ff55f3964123c015e43b644ed8c92d342a2c6c812d355459\" returns successfully" Apr 21 10:13:04.296426 kubelet[2137]: E0421 10:13:04.296388 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:13:04.756470 kubelet[2137]: I0421 10:13:04.756421 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:13:04.966835 kubelet[2137]: E0421 10:13:04.966771 2137 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:13:05.066006 kubelet[2137]: I0421 10:13:05.065583 2137 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:13:05.066006 kubelet[2137]: E0421 10:13:05.065616 2137 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 21 10:13:05.079759 kubelet[2137]: E0421 10:13:05.079673 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.162592 kubelet[2137]: E0421 10:13:05.162547 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:05.162720 kubelet[2137]: E0421 10:13:05.162697 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:05.163478 kubelet[2137]: E0421 10:13:05.163436 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:05.163534 kubelet[2137]: E0421 10:13:05.163524 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:05.164292 kubelet[2137]: E0421 10:13:05.164274 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:05.164410 kubelet[2137]: E0421 10:13:05.164395 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:05.180568 kubelet[2137]: E0421 
10:13:05.180504 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.281450 kubelet[2137]: E0421 10:13:05.281300 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.382421 kubelet[2137]: E0421 10:13:05.381998 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.483234 kubelet[2137]: E0421 10:13:05.483109 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.584185 kubelet[2137]: E0421 10:13:05.584042 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.685193 kubelet[2137]: E0421 10:13:05.684707 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.768013 kernel: hrtimer: interrupt took 2981591 ns Apr 21 10:13:05.790638 kubelet[2137]: E0421 10:13:05.789584 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:05.901640 kubelet[2137]: E0421 10:13:05.900385 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.003130 kubelet[2137]: E0421 10:13:06.002823 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.112595 kubelet[2137]: E0421 10:13:06.111334 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.204550 kubelet[2137]: E0421 10:13:06.204241 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:06.204550 
kubelet[2137]: E0421 10:13:06.204311 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:13:06.204550 kubelet[2137]: E0421 10:13:06.204447 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:06.204550 kubelet[2137]: E0421 10:13:06.204486 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:06.293276 kubelet[2137]: E0421 10:13:06.212403 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.313614 kubelet[2137]: E0421 10:13:06.313508 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.414605 kubelet[2137]: E0421 10:13:06.414494 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.515808 kubelet[2137]: E0421 10:13:06.515716 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.616886 kubelet[2137]: E0421 10:13:06.616626 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.717809 kubelet[2137]: E0421 10:13:06.717714 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:06.738556 kubelet[2137]: I0421 10:13:06.738295 2137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:06.756204 kubelet[2137]: I0421 10:13:06.755927 2137 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:06.759761 kubelet[2137]: I0421 10:13:06.759681 2137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:07.124684 kubelet[2137]: I0421 10:13:07.124588 2137 apiserver.go:52] "Watching apiserver" Apr 21 10:13:07.130277 kubelet[2137]: E0421 10:13:07.130205 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:07.138798 kubelet[2137]: I0421 10:13:07.138749 2137 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:13:07.204495 kubelet[2137]: E0421 10:13:07.204433 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:07.204611 kubelet[2137]: E0421 10:13:07.204505 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:07.227669 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)... Apr 21 10:13:07.227693 systemd[1]: Reloading... Apr 21 10:13:07.322583 zram_generator::config[2467]: No configuration found. Apr 21 10:13:07.402238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:13:07.467190 systemd[1]: Reloading finished in 239 ms. Apr 21 10:13:07.509165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:13:07.539743 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 21 10:13:07.540025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:13:07.553881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:13:07.713482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:13:07.717914 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:13:07.851686 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:13:07.851686 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:13:07.851686 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:13:07.852166 kubelet[2512]: I0421 10:13:07.851700 2512 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:13:07.857750 kubelet[2512]: I0421 10:13:07.857707 2512 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:13:07.857750 kubelet[2512]: I0421 10:13:07.857739 2512 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:13:07.857914 kubelet[2512]: I0421 10:13:07.857897 2512 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:13:07.859016 kubelet[2512]: I0421 10:13:07.858982 2512 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:13:07.860727 kubelet[2512]: I0421 10:13:07.860694 2512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:13:07.863642 kubelet[2512]: E0421 10:13:07.863611 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:13:07.863642 kubelet[2512]: I0421 10:13:07.863644 2512 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:13:07.871578 kubelet[2512]: I0421 10:13:07.871513 2512 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:13:07.871698 kubelet[2512]: I0421 10:13:07.871671 2512 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:13:07.871810 kubelet[2512]: I0421 10:13:07.871696 2512 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:13:07.871897 kubelet[2512]: I0421 10:13:07.871814 2512 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:13:07.871897 
kubelet[2512]: I0421 10:13:07.871821 2512 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:13:07.871897 kubelet[2512]: I0421 10:13:07.871853 2512 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:13:07.871993 kubelet[2512]: I0421 10:13:07.871979 2512 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:13:07.872012 kubelet[2512]: I0421 10:13:07.871995 2512 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:13:07.872012 kubelet[2512]: I0421 10:13:07.872011 2512 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:13:07.872064 kubelet[2512]: I0421 10:13:07.872042 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:13:07.872822 kubelet[2512]: I0421 10:13:07.872724 2512 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:13:07.874397 kubelet[2512]: I0421 10:13:07.873109 2512 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:13:07.876976 kubelet[2512]: I0421 10:13:07.876916 2512 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:13:07.876976 kubelet[2512]: I0421 10:13:07.876957 2512 server.go:1289] "Started kubelet" Apr 21 10:13:07.877690 kubelet[2512]: I0421 10:13:07.877451 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:13:07.877690 kubelet[2512]: I0421 10:13:07.877682 2512 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:13:07.877780 kubelet[2512]: I0421 10:13:07.877717 2512 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:13:07.878170 kubelet[2512]: I0421 10:13:07.878153 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:13:07.880460 
kubelet[2512]: I0421 10:13:07.880324 2512 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:13:07.880766 kubelet[2512]: E0421 10:13:07.880737 2512 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:13:07.880796 kubelet[2512]: I0421 10:13:07.880786 2512 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:13:07.881329 kubelet[2512]: I0421 10:13:07.880910 2512 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:13:07.881329 kubelet[2512]: I0421 10:13:07.881124 2512 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:13:07.882473 kubelet[2512]: I0421 10:13:07.881645 2512 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:13:07.882473 kubelet[2512]: I0421 10:13:07.881708 2512 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:13:07.883956 kubelet[2512]: I0421 10:13:07.883926 2512 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:13:07.886901 kubelet[2512]: I0421 10:13:07.885759 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:13:07.902053 kubelet[2512]: I0421 10:13:07.901957 2512 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:13:07.904697 kubelet[2512]: I0421 10:13:07.904664 2512 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:13:07.904697 kubelet[2512]: I0421 10:13:07.904696 2512 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:13:07.904830 kubelet[2512]: I0421 10:13:07.904714 2512 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:13:07.904830 kubelet[2512]: I0421 10:13:07.904754 2512 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:13:07.905415 kubelet[2512]: E0421 10:13:07.904837 2512 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:13:07.926968 kubelet[2512]: I0421 10:13:07.926942 2512 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:13:07.926968 kubelet[2512]: I0421 10:13:07.926962 2512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:13:07.926968 kubelet[2512]: I0421 10:13:07.926975 2512 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:13:07.927098 kubelet[2512]: I0421 10:13:07.927065 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:13:07.927098 kubelet[2512]: I0421 10:13:07.927071 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:13:07.927098 kubelet[2512]: I0421 10:13:07.927084 2512 policy_none.go:49] "None policy: Start" Apr 21 10:13:07.927098 kubelet[2512]: I0421 10:13:07.927091 2512 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:13:07.927098 kubelet[2512]: I0421 10:13:07.927097 2512 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:13:07.927200 kubelet[2512]: I0421 10:13:07.927184 2512 state_mem.go:75] "Updated machine memory state" Apr 21 10:13:07.930533 kubelet[2512]: E0421 10:13:07.930198 2512 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:13:07.930533 kubelet[2512]: I0421 
10:13:07.930314 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:13:07.930533 kubelet[2512]: I0421 10:13:07.930322 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:13:07.930533 kubelet[2512]: I0421 10:13:07.930493 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:13:07.931510 kubelet[2512]: E0421 10:13:07.931311 2512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:13:08.007556 kubelet[2512]: I0421 10:13:08.007093 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:08.007556 kubelet[2512]: I0421 10:13:08.007129 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.007556 kubelet[2512]: I0421 10:13:08.007287 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.014055 kubelet[2512]: E0421 10:13:08.014011 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.014326 kubelet[2512]: E0421 10:13:08.014302 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:08.014326 kubelet[2512]: E0421 10:13:08.014325 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.034164 kubelet[2512]: I0421 10:13:08.034126 2512 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:13:08.040262 kubelet[2512]: I0421 10:13:08.040232 2512 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 21 10:13:08.040328 kubelet[2512]: I0421 10:13:08.040301 2512 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:13:08.082591 kubelet[2512]: I0421 10:13:08.082542 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.082591 kubelet[2512]: I0421 10:13:08.082588 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.082749 kubelet[2512]: I0421 10:13:08.082612 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.082749 kubelet[2512]: I0421 10:13:08.082634 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.082749 kubelet[2512]: I0421 10:13:08.082648 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.082749 kubelet[2512]: I0421 10:13:08.082660 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.082749 kubelet[2512]: I0421 10:13:08.082673 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.082898 kubelet[2512]: I0421 10:13:08.082689 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:13:08.082898 kubelet[2512]: I0421 10:13:08.082703 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:08.224389 sudo[2553]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 21 10:13:08.224611 sudo[2553]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 21 10:13:08.315759 kubelet[2512]: E0421 10:13:08.315415 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.315759 kubelet[2512]: E0421 10:13:08.315467 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.315759 kubelet[2512]: E0421 10:13:08.315487 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.873692 kubelet[2512]: I0421 10:13:08.873343 2512 apiserver.go:52] "Watching apiserver" Apr 21 10:13:08.881294 kubelet[2512]: I0421 10:13:08.881248 2512 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:13:08.910861 sudo[2553]: pam_unix(sudo:session): session closed for user root Apr 21 10:13:08.916640 kubelet[2512]: I0421 10:13:08.916601 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.916983 kubelet[2512]: I0421 10:13:08.916849 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:08.917294 kubelet[2512]: E0421 10:13:08.917257 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.923860 kubelet[2512]: E0421 10:13:08.923825 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:13:08.924856 kubelet[2512]: E0421 10:13:08.924028 2512 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.924856 kubelet[2512]: E0421 10:13:08.924736 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 10:13:08.924981 kubelet[2512]: E0421 10:13:08.924939 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:08.940752 kubelet[2512]: I0421 10:13:08.940702 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.940689301 podStartE2EDuration="2.940689301s" podCreationTimestamp="2026-04-21 10:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:08.934757778 +0000 UTC m=+1.213513626" watchObservedRunningTime="2026-04-21 10:13:08.940689301 +0000 UTC m=+1.219445136" Apr 21 10:13:08.947204 kubelet[2512]: I0421 10:13:08.947131 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.94711858 podStartE2EDuration="2.94711858s" podCreationTimestamp="2026-04-21 10:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:08.940934727 +0000 UTC m=+1.219690573" watchObservedRunningTime="2026-04-21 10:13:08.94711858 +0000 UTC m=+1.225874426" Apr 21 10:13:09.921748 kubelet[2512]: E0421 10:13:09.921684 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:09.922109 
kubelet[2512]: E0421 10:13:09.921857 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:13:10.622675 sudo[1641]: pam_unix(sudo:session): session closed for user root Apr 21 10:13:10.624427 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 21 10:13:10.628474 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:45256.service: Deactivated successfully. Apr 21 10:13:10.629885 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:13:10.630037 systemd[1]: session-7.scope: Consumed 5.090s CPU time, 161.1M memory peak, 0B memory swap peak. Apr 21 10:13:10.630474 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:13:10.631517 systemd-logind[1443]: Removed session 7. Apr 21 10:13:14.021247 kubelet[2512]: I0421 10:13:14.021186 2512 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:13:14.021805 kubelet[2512]: I0421 10:13:14.021744 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:13:14.021829 containerd[1463]: time="2026-04-21T10:13:14.021513134Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:13:14.859722 kubelet[2512]: I0421 10:13:14.859655 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.859642525 podStartE2EDuration="8.859642525s" podCreationTimestamp="2026-04-21 10:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:08.94744034 +0000 UTC m=+1.226196174" watchObservedRunningTime="2026-04-21 10:13:14.859642525 +0000 UTC m=+7.138398362"
Apr 21 10:13:14.870471 systemd[1]: Created slice kubepods-besteffort-pod08c69c2b_09f7_4f91_aa89_73f449fabfdf.slice - libcontainer container kubepods-besteffort-pod08c69c2b_09f7_4f91_aa89_73f449fabfdf.slice.
Apr 21 10:13:14.879342 systemd[1]: Created slice kubepods-burstable-pod5685b68a_829a_423f_864f_f0ad638578f5.slice - libcontainer container kubepods-burstable-pod5685b68a_829a_423f_864f_f0ad638578f5.slice.
Apr 21 10:13:14.985398 kubelet[2512]: I0421 10:13:14.985240 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cni-path\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985398 kubelet[2512]: I0421 10:13:14.985333 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-kernel\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985398 kubelet[2512]: I0421 10:13:14.985384 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-etc-cni-netd\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985398 kubelet[2512]: I0421 10:13:14.985401 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-xtables-lock\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985398 kubelet[2512]: I0421 10:13:14.985417 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-hubble-tls\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985780 kubelet[2512]: I0421 10:13:14.985432 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmrt\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-kube-api-access-lfmrt\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985780 kubelet[2512]: I0421 10:13:14.985447 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c69c2b-09f7-4f91-aa89-73f449fabfdf-lib-modules\") pod \"kube-proxy-z9jwc\" (UID: \"08c69c2b-09f7-4f91-aa89-73f449fabfdf\") " pod="kube-system/kube-proxy-z9jwc"
Apr 21 10:13:14.985780 kubelet[2512]: I0421 10:13:14.985460 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vccvc\" (UniqueName: \"kubernetes.io/projected/08c69c2b-09f7-4f91-aa89-73f449fabfdf-kube-api-access-vccvc\") pod \"kube-proxy-z9jwc\" (UID: \"08c69c2b-09f7-4f91-aa89-73f449fabfdf\") " pod="kube-system/kube-proxy-z9jwc"
Apr 21 10:13:14.985780 kubelet[2512]: I0421 10:13:14.985485 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08c69c2b-09f7-4f91-aa89-73f449fabfdf-kube-proxy\") pod \"kube-proxy-z9jwc\" (UID: \"08c69c2b-09f7-4f91-aa89-73f449fabfdf\") " pod="kube-system/kube-proxy-z9jwc"
Apr 21 10:13:14.985780 kubelet[2512]: I0421 10:13:14.985498 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-cgroup\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985513 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-bpf-maps\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985525 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-lib-modules\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985542 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5685b68a-829a-423f-864f-f0ad638578f5-clustermesh-secrets\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985555 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5685b68a-829a-423f-864f-f0ad638578f5-cilium-config-path\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985580 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-net\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.985885 kubelet[2512]: I0421 10:13:14.985593 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-run\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:14.986029 kubelet[2512]: I0421 10:13:14.985607 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c69c2b-09f7-4f91-aa89-73f449fabfdf-xtables-lock\") pod \"kube-proxy-z9jwc\" (UID: \"08c69c2b-09f7-4f91-aa89-73f449fabfdf\") " pod="kube-system/kube-proxy-z9jwc"
Apr 21 10:13:14.986029 kubelet[2512]: I0421 10:13:14.985623 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-hostproc\") pod \"cilium-9c7h5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") " pod="kube-system/cilium-9c7h5"
Apr 21 10:13:15.168287 systemd[1]: Created slice kubepods-besteffort-pod6cf9b5cd_e317_47c3_b873_421b5120912f.slice - libcontainer container kubepods-besteffort-pod6cf9b5cd_e317_47c3_b873_421b5120912f.slice.
Apr 21 10:13:15.176298 kubelet[2512]: E0421 10:13:15.176230 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.178483 containerd[1463]: time="2026-04-21T10:13:15.178454693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9jwc,Uid:08c69c2b-09f7-4f91-aa89-73f449fabfdf,Namespace:kube-system,Attempt:0,}"
Apr 21 10:13:15.181054 kubelet[2512]: E0421 10:13:15.181013 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.189232 containerd[1463]: time="2026-04-21T10:13:15.189003706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c7h5,Uid:5685b68a-829a-423f-864f-f0ad638578f5,Namespace:kube-system,Attempt:0,}"
Apr 21 10:13:15.242059 containerd[1463]: time="2026-04-21T10:13:15.241740072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:13:15.244382 containerd[1463]: time="2026-04-21T10:13:15.242959592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:13:15.244382 containerd[1463]: time="2026-04-21T10:13:15.243045798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.244382 containerd[1463]: time="2026-04-21T10:13:15.243340470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.286860 systemd[1]: Started cri-containerd-1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93.scope - libcontainer container 1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93.
Apr 21 10:13:15.289544 kubelet[2512]: I0421 10:13:15.289501 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wpd\" (UniqueName: \"kubernetes.io/projected/6cf9b5cd-e317-47c3-b873-421b5120912f-kube-api-access-x5wpd\") pod \"cilium-operator-6c4d7847fc-hzl67\" (UID: \"6cf9b5cd-e317-47c3-b873-421b5120912f\") " pod="kube-system/cilium-operator-6c4d7847fc-hzl67"
Apr 21 10:13:15.289755 kubelet[2512]: I0421 10:13:15.289675 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cf9b5cd-e317-47c3-b873-421b5120912f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hzl67\" (UID: \"6cf9b5cd-e317-47c3-b873-421b5120912f\") " pod="kube-system/cilium-operator-6c4d7847fc-hzl67"
Apr 21 10:13:15.296838 containerd[1463]: time="2026-04-21T10:13:15.296507908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:13:15.296838 containerd[1463]: time="2026-04-21T10:13:15.296550075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:13:15.296838 containerd[1463]: time="2026-04-21T10:13:15.296596502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.296838 containerd[1463]: time="2026-04-21T10:13:15.296681396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.312530 systemd[1]: Started cri-containerd-cdf6d316f183e8c331cd4f1830fa53433a5875900a386796ea9b3b0f0c1bfedf.scope - libcontainer container cdf6d316f183e8c331cd4f1830fa53433a5875900a386796ea9b3b0f0c1bfedf.
Apr 21 10:13:15.313029 containerd[1463]: time="2026-04-21T10:13:15.313001189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c7h5,Uid:5685b68a-829a-423f-864f-f0ad638578f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\""
Apr 21 10:13:15.313736 kubelet[2512]: E0421 10:13:15.313560 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.314635 containerd[1463]: time="2026-04-21T10:13:15.314615829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 21 10:13:15.331670 containerd[1463]: time="2026-04-21T10:13:15.331601129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9jwc,Uid:08c69c2b-09f7-4f91-aa89-73f449fabfdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdf6d316f183e8c331cd4f1830fa53433a5875900a386796ea9b3b0f0c1bfedf\""
Apr 21 10:13:15.332273 kubelet[2512]: E0421 10:13:15.332247 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.338898 containerd[1463]: time="2026-04-21T10:13:15.338825504Z" level=info msg="CreateContainer within sandbox \"cdf6d316f183e8c331cd4f1830fa53433a5875900a386796ea9b3b0f0c1bfedf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:13:15.354868 containerd[1463]: time="2026-04-21T10:13:15.354797068Z" level=info msg="CreateContainer within sandbox \"cdf6d316f183e8c331cd4f1830fa53433a5875900a386796ea9b3b0f0c1bfedf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ed021bbd6308882c0793e5f6c9b36428395e0313c1a26b0f6bfbb2395e4d5be\""
Apr 21 10:13:15.355864 containerd[1463]: time="2026-04-21T10:13:15.355839026Z" level=info msg="StartContainer for \"1ed021bbd6308882c0793e5f6c9b36428395e0313c1a26b0f6bfbb2395e4d5be\""
Apr 21 10:13:15.384978 systemd[1]: Started cri-containerd-1ed021bbd6308882c0793e5f6c9b36428395e0313c1a26b0f6bfbb2395e4d5be.scope - libcontainer container 1ed021bbd6308882c0793e5f6c9b36428395e0313c1a26b0f6bfbb2395e4d5be.
Apr 21 10:13:15.412219 containerd[1463]: time="2026-04-21T10:13:15.412167299Z" level=info msg="StartContainer for \"1ed021bbd6308882c0793e5f6c9b36428395e0313c1a26b0f6bfbb2395e4d5be\" returns successfully"
Apr 21 10:13:15.472839 kubelet[2512]: E0421 10:13:15.472674 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.473898 containerd[1463]: time="2026-04-21T10:13:15.473831571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hzl67,Uid:6cf9b5cd-e317-47c3-b873-421b5120912f,Namespace:kube-system,Attempt:0,}"
Apr 21 10:13:15.509941 containerd[1463]: time="2026-04-21T10:13:15.509700602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:13:15.510083 containerd[1463]: time="2026-04-21T10:13:15.509748333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:13:15.510083 containerd[1463]: time="2026-04-21T10:13:15.509925476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.511288 containerd[1463]: time="2026-04-21T10:13:15.511119964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:15.531493 systemd[1]: Started cri-containerd-48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095.scope - libcontainer container 48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095.
Apr 21 10:13:15.582174 containerd[1463]: time="2026-04-21T10:13:15.582133049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hzl67,Uid:6cf9b5cd-e317-47c3-b873-421b5120912f,Namespace:kube-system,Attempt:0,} returns sandbox id \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\""
Apr 21 10:13:15.583093 kubelet[2512]: E0421 10:13:15.583074 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:15.939461 kubelet[2512]: E0421 10:13:15.937072 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:16.510089 kubelet[2512]: E0421 10:13:16.510007 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:16.523471 kubelet[2512]: I0421 10:13:16.523410 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z9jwc" podStartSLOduration=2.523392962 podStartE2EDuration="2.523392962s" podCreationTimestamp="2026-04-21 10:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:15.946744114 +0000 UTC m=+8.225499957" watchObservedRunningTime="2026-04-21 10:13:16.523392962 +0000 UTC m=+8.802148809"
Apr 21 10:13:16.939324 kubelet[2512]: E0421 10:13:16.939167 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:17.795749 kubelet[2512]: E0421 10:13:17.795710 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:17.940887 kubelet[2512]: E0421 10:13:17.940816 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:17.941059 kubelet[2512]: E0421 10:13:17.941007 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:18.495277 kubelet[2512]: E0421 10:13:18.495240 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:18.942671 kubelet[2512]: E0421 10:13:18.942462 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:24.266598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887986186.mount: Deactivated successfully.
Apr 21 10:13:25.663220 containerd[1463]: time="2026-04-21T10:13:25.663125909Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:13:25.663890 containerd[1463]: time="2026-04-21T10:13:25.663808987Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 21 10:13:25.664948 containerd[1463]: time="2026-04-21T10:13:25.664900989Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:13:25.666503 containerd[1463]: time="2026-04-21T10:13:25.666471004Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.351727445s"
Apr 21 10:13:25.666556 containerd[1463]: time="2026-04-21T10:13:25.666505913Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 21 10:13:25.674199 containerd[1463]: time="2026-04-21T10:13:25.673503042Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 21 10:13:25.691530 containerd[1463]: time="2026-04-21T10:13:25.691461348Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 10:13:25.708293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336509763.mount: Deactivated successfully.
Apr 21 10:13:25.724381 containerd[1463]: time="2026-04-21T10:13:25.724261823Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\""
Apr 21 10:13:25.725285 containerd[1463]: time="2026-04-21T10:13:25.725254544Z" level=info msg="StartContainer for \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\""
Apr 21 10:13:25.758556 systemd[1]: Started cri-containerd-34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b.scope - libcontainer container 34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b.
Apr 21 10:13:25.781779 containerd[1463]: time="2026-04-21T10:13:25.781604245Z" level=info msg="StartContainer for \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\" returns successfully"
Apr 21 10:13:25.789767 systemd[1]: cri-containerd-34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b.scope: Deactivated successfully.
Apr 21 10:13:25.893142 containerd[1463]: time="2026-04-21T10:13:25.890871593Z" level=info msg="shim disconnected" id=34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b namespace=k8s.io
Apr 21 10:13:25.893142 containerd[1463]: time="2026-04-21T10:13:25.893085770Z" level=warning msg="cleaning up after shim disconnected" id=34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b namespace=k8s.io
Apr 21 10:13:25.893142 containerd[1463]: time="2026-04-21T10:13:25.893096520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:13:25.967908 kubelet[2512]: E0421 10:13:25.967715 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:25.976032 containerd[1463]: time="2026-04-21T10:13:25.975839605Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 10:13:26.002907 containerd[1463]: time="2026-04-21T10:13:26.002850687Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\""
Apr 21 10:13:26.003943 containerd[1463]: time="2026-04-21T10:13:26.003907684Z" level=info msg="StartContainer for \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\""
Apr 21 10:13:26.031552 systemd[1]: Started cri-containerd-4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b.scope - libcontainer container 4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b.
Apr 21 10:13:26.051738 containerd[1463]: time="2026-04-21T10:13:26.051703402Z" level=info msg="StartContainer for \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\" returns successfully"
Apr 21 10:13:26.061807 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:13:26.061988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:13:26.062082 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:13:26.070588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:13:26.070938 systemd[1]: cri-containerd-4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b.scope: Deactivated successfully.
Apr 21 10:13:26.094339 containerd[1463]: time="2026-04-21T10:13:26.094222227Z" level=info msg="shim disconnected" id=4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b namespace=k8s.io
Apr 21 10:13:26.094339 containerd[1463]: time="2026-04-21T10:13:26.094301494Z" level=warning msg="cleaning up after shim disconnected" id=4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b namespace=k8s.io
Apr 21 10:13:26.094339 containerd[1463]: time="2026-04-21T10:13:26.094312128Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:13:26.096264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:13:26.705643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b-rootfs.mount: Deactivated successfully.
Apr 21 10:13:26.956843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122340060.mount: Deactivated successfully.
Apr 21 10:13:26.967314 kubelet[2512]: E0421 10:13:26.967241 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:26.976058 containerd[1463]: time="2026-04-21T10:13:26.975998960Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 10:13:27.002611 containerd[1463]: time="2026-04-21T10:13:27.002467858Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\""
Apr 21 10:13:27.005791 containerd[1463]: time="2026-04-21T10:13:27.005443221Z" level=info msg="StartContainer for \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\""
Apr 21 10:13:27.033650 systemd[1]: Started cri-containerd-9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993.scope - libcontainer container 9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993.
Apr 21 10:13:27.062599 systemd[1]: cri-containerd-9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993.scope: Deactivated successfully.
Apr 21 10:13:27.064069 containerd[1463]: time="2026-04-21T10:13:27.064013213Z" level=info msg="StartContainer for \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\" returns successfully"
Apr 21 10:13:27.091496 containerd[1463]: time="2026-04-21T10:13:27.091428138Z" level=info msg="shim disconnected" id=9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993 namespace=k8s.io
Apr 21 10:13:27.091496 containerd[1463]: time="2026-04-21T10:13:27.091473275Z" level=warning msg="cleaning up after shim disconnected" id=9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993 namespace=k8s.io
Apr 21 10:13:27.091496 containerd[1463]: time="2026-04-21T10:13:27.091479732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:13:27.395813 containerd[1463]: time="2026-04-21T10:13:27.395748631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:13:27.396580 containerd[1463]: time="2026-04-21T10:13:27.396535566Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 21 10:13:27.397708 containerd[1463]: time="2026-04-21T10:13:27.397676205Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:13:27.399145 containerd[1463]: time="2026-04-21T10:13:27.398895336Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.725353019s"
Apr 21 10:13:27.399145 containerd[1463]: time="2026-04-21T10:13:27.398923920Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 21 10:13:27.408470 containerd[1463]: time="2026-04-21T10:13:27.408410999Z" level=info msg="CreateContainer within sandbox \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 21 10:13:27.419577 containerd[1463]: time="2026-04-21T10:13:27.419510276Z" level=info msg="CreateContainer within sandbox \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\""
Apr 21 10:13:27.420087 containerd[1463]: time="2026-04-21T10:13:27.420055518Z" level=info msg="StartContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\""
Apr 21 10:13:27.446575 systemd[1]: Started cri-containerd-d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f.scope - libcontainer container d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f.
Apr 21 10:13:27.473486 containerd[1463]: time="2026-04-21T10:13:27.473401747Z" level=info msg="StartContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" returns successfully"
Apr 21 10:13:27.706266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993-rootfs.mount: Deactivated successfully.
Apr 21 10:13:27.982645 kubelet[2512]: E0421 10:13:27.982603 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:27.994317 kubelet[2512]: E0421 10:13:27.994271 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:28.003648 containerd[1463]: time="2026-04-21T10:13:28.003607554Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 10:13:28.023637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017785685.mount: Deactivated successfully.
Apr 21 10:13:28.029430 containerd[1463]: time="2026-04-21T10:13:28.028087746Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\""
Apr 21 10:13:28.029430 containerd[1463]: time="2026-04-21T10:13:28.028580277Z" level=info msg="StartContainer for \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\""
Apr 21 10:13:28.082110 kubelet[2512]: I0421 10:13:28.082053 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hzl67" podStartSLOduration=1.266119971 podStartE2EDuration="13.082038197s" podCreationTimestamp="2026-04-21 10:13:15 +0000 UTC" firstStartedPulling="2026-04-21 10:13:15.58370769 +0000 UTC m=+7.862463525" lastFinishedPulling="2026-04-21 10:13:27.399625913 +0000 UTC m=+19.678381751" observedRunningTime="2026-04-21 10:13:28.027976884 +0000 UTC m=+20.306732730" watchObservedRunningTime="2026-04-21 10:13:28.082038197 +0000 UTC m=+20.360794043"
Apr 21 10:13:28.099617 systemd[1]: Started cri-containerd-9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2.scope - libcontainer container 9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2.
Apr 21 10:13:28.128311 systemd[1]: cri-containerd-9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2.scope: Deactivated successfully.
Apr 21 10:13:28.151831 containerd[1463]: time="2026-04-21T10:13:28.151754222Z" level=info msg="StartContainer for \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\" returns successfully"
Apr 21 10:13:28.190946 containerd[1463]: time="2026-04-21T10:13:28.190795933Z" level=info msg="shim disconnected" id=9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2 namespace=k8s.io
Apr 21 10:13:28.190946 containerd[1463]: time="2026-04-21T10:13:28.190860769Z" level=warning msg="cleaning up after shim disconnected" id=9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2 namespace=k8s.io
Apr 21 10:13:28.190946 containerd[1463]: time="2026-04-21T10:13:28.190867541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:13:28.705868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2-rootfs.mount: Deactivated successfully.
Apr 21 10:13:28.990445 kubelet[2512]: E0421 10:13:28.990417    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:28.990768 kubelet[2512]: E0421 10:13:28.990751    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:29.000326 containerd[1463]: time="2026-04-21T10:13:29.000261409Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 10:13:29.029306 containerd[1463]: time="2026-04-21T10:13:29.029137541Z" level=info msg="CreateContainer within sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\""
Apr 21 10:13:29.030719 containerd[1463]: time="2026-04-21T10:13:29.030652383Z" level=info msg="StartContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\""
Apr 21 10:13:29.072099 systemd[1]: Started cri-containerd-f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78.scope - libcontainer container f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78.
Apr 21 10:13:29.099696 containerd[1463]: time="2026-04-21T10:13:29.099644332Z" level=info msg="StartContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" returns successfully"
Apr 21 10:13:29.247611 kubelet[2512]: I0421 10:13:29.247477    2512 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 21 10:13:29.291077 systemd[1]: Created slice kubepods-burstable-pod27bf3e57_a44a_484b_ac5f_6b104de8c959.slice - libcontainer container kubepods-burstable-pod27bf3e57_a44a_484b_ac5f_6b104de8c959.slice.
Apr 21 10:13:29.296345 systemd[1]: Created slice kubepods-burstable-podd355cf6e_4ad7_4e8c_87f3_dc7f8cddc688.slice - libcontainer container kubepods-burstable-podd355cf6e_4ad7_4e8c_87f3_dc7f8cddc688.slice.
Apr 21 10:13:29.412877 kubelet[2512]: I0421 10:13:29.412836    2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688-config-volume\") pod \"coredns-674b8bbfcf-2l4dl\" (UID: \"d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688\") " pod="kube-system/coredns-674b8bbfcf-2l4dl"
Apr 21 10:13:29.412877 kubelet[2512]: I0421 10:13:29.412876    2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27bf3e57-a44a-484b-ac5f-6b104de8c959-config-volume\") pod \"coredns-674b8bbfcf-wwbhp\" (UID: \"27bf3e57-a44a-484b-ac5f-6b104de8c959\") " pod="kube-system/coredns-674b8bbfcf-wwbhp"
Apr 21 10:13:29.413058 kubelet[2512]: I0421 10:13:29.412926    2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmk22\" (UniqueName: \"kubernetes.io/projected/27bf3e57-a44a-484b-ac5f-6b104de8c959-kube-api-access-nmk22\") pod \"coredns-674b8bbfcf-wwbhp\" (UID: \"27bf3e57-a44a-484b-ac5f-6b104de8c959\") " pod="kube-system/coredns-674b8bbfcf-wwbhp"
Apr 21 10:13:29.413058 kubelet[2512]: I0421 10:13:29.412980    2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rcw\" (UniqueName: \"kubernetes.io/projected/d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688-kube-api-access-s9rcw\") pod \"coredns-674b8bbfcf-2l4dl\" (UID: \"d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688\") " pod="kube-system/coredns-674b8bbfcf-2l4dl"
Apr 21 10:13:29.595685 kubelet[2512]: E0421 10:13:29.595505    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:29.599061 containerd[1463]: time="2026-04-21T10:13:29.598653340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwbhp,Uid:27bf3e57-a44a-484b-ac5f-6b104de8c959,Namespace:kube-system,Attempt:0,}"
Apr 21 10:13:29.600172 kubelet[2512]: E0421 10:13:29.600094    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:29.600634 containerd[1463]: time="2026-04-21T10:13:29.600600639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2l4dl,Uid:d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688,Namespace:kube-system,Attempt:0,}"
Apr 21 10:13:29.842184 update_engine[1444]: I20260421 10:13:29.842051  1444 update_attempter.cc:509] Updating boot flags...
Apr 21 10:13:29.870413 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3365)
Apr 21 10:13:29.886498 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3366)
Apr 21 10:13:29.917407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3366)
Apr 21 10:13:29.998461 kubelet[2512]: E0421 10:13:29.998422    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:30.013400 kubelet[2512]: I0421 10:13:30.013294    2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9c7h5" podStartSLOduration=5.654758288 podStartE2EDuration="16.013280289s" podCreationTimestamp="2026-04-21 10:13:14 +0000 UTC" firstStartedPulling="2026-04-21 10:13:15.314171938 +0000 UTC m=+7.592948166" lastFinishedPulling="2026-04-21 10:13:25.672714327 +0000 UTC m=+17.951470167" observedRunningTime="2026-04-21 10:13:30.011734007 +0000 UTC m=+22.290489861" watchObservedRunningTime="2026-04-21 10:13:30.013280289 +0000 UTC m=+22.292036143"
Apr 21 10:13:30.998292 systemd-networkd[1406]: cilium_host: Link UP
Apr 21 10:13:30.998443 systemd-networkd[1406]: cilium_net: Link UP
Apr 21 10:13:30.998445 systemd-networkd[1406]: cilium_net: Gained carrier
Apr 21 10:13:30.998564 systemd-networkd[1406]: cilium_host: Gained carrier
Apr 21 10:13:30.998754 systemd-networkd[1406]: cilium_host: Gained IPv6LL
Apr 21 10:13:31.003788 kubelet[2512]: E0421 10:13:31.002607    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:31.084343 systemd-networkd[1406]: cilium_vxlan: Link UP
Apr 21 10:13:31.084349 systemd-networkd[1406]: cilium_vxlan: Gained carrier
Apr 21 10:13:31.268424 kernel: NET: Registered PF_ALG protocol family
Apr 21 10:13:31.579667 systemd-networkd[1406]: cilium_net: Gained IPv6LL
Apr 21 10:13:31.835077 systemd-networkd[1406]: lxc_health: Link UP
Apr 21 10:13:31.843656 systemd-networkd[1406]: lxc_health: Gained carrier
Apr 21 10:13:32.031258 kubelet[2512]: E0421 10:13:32.031202    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:32.166408 systemd-networkd[1406]: lxca5c8cabf7d30: Link UP
Apr 21 10:13:32.168773 systemd-networkd[1406]: lxcf6cca0107abf: Link UP
Apr 21 10:13:32.186440 kernel: eth0: renamed from tmp73bdb
Apr 21 10:13:32.194435 kernel: eth0: renamed from tmp02276
Apr 21 10:13:32.198941 systemd-networkd[1406]: lxca5c8cabf7d30: Gained carrier
Apr 21 10:13:32.200196 systemd-networkd[1406]: lxcf6cca0107abf: Gained carrier
Apr 21 10:13:32.220711 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
Apr 21 10:13:33.179784 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Apr 21 10:13:33.184129 kubelet[2512]: E0421 10:13:33.184057    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:33.371901 systemd-networkd[1406]: lxca5c8cabf7d30: Gained IPv6LL
Apr 21 10:14:34.011655 systemd-networkd[1406]: lxcf6cca0107abf: Gained IPv6LL
Apr 21 10:13:34.898629 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670).
Apr 21 10:13:34.942171 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:34.943705 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:34.948531 systemd-logind[1443]: New session 8 of user core.
Apr 21 10:13:34.953557 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:13:35.121780 kubelet[2512]: I0421 10:13:35.120506    2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:13:35.121780 kubelet[2512]: E0421 10:13:35.121304    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:35.165198 sshd[3749]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:35.168505 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:13:35.168661 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:58670.service: Deactivated successfully.
Apr 21 10:13:35.169951 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:13:35.171161 systemd-logind[1443]: Removed session 8.
Apr 21 10:13:35.750573 containerd[1463]: time="2026-04-21T10:13:35.749708672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:13:35.750991 containerd[1463]: time="2026-04-21T10:13:35.750537325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:13:35.750991 containerd[1463]: time="2026-04-21T10:13:35.750582150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:35.750991 containerd[1463]: time="2026-04-21T10:13:35.750727815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:35.752645 containerd[1463]: time="2026-04-21T10:13:35.752561729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:13:35.752645 containerd[1463]: time="2026-04-21T10:13:35.752603770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:13:35.752645 containerd[1463]: time="2026-04-21T10:13:35.752612005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:35.752730 containerd[1463]: time="2026-04-21T10:13:35.752665607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:13:35.766005 systemd[1]: run-containerd-runc-k8s.io-0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573-runc.Cr85jZ.mount: Deactivated successfully.
Apr 21 10:13:35.786608 systemd[1]: Started cri-containerd-0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573.scope - libcontainer container 0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573.
Apr 21 10:13:35.788097 systemd[1]: Started cri-containerd-73bdb9020a3d44cd93c9e9228df3892ece1977a5c527a455e017ea20fe7958fb.scope - libcontainer container 73bdb9020a3d44cd93c9e9228df3892ece1977a5c527a455e017ea20fe7958fb.
Apr 21 10:13:35.797602 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:13:35.798270 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:13:35.827631 containerd[1463]: time="2026-04-21T10:13:35.827585662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2l4dl,Uid:d355cf6e-4ad7-4e8c-87f3-dc7f8cddc688,Namespace:kube-system,Attempt:0,} returns sandbox id \"73bdb9020a3d44cd93c9e9228df3892ece1977a5c527a455e017ea20fe7958fb\""
Apr 21 10:13:35.828385 kubelet[2512]: E0421 10:13:35.828166    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:35.830390 containerd[1463]: time="2026-04-21T10:13:35.830305528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwbhp,Uid:27bf3e57-a44a-484b-ac5f-6b104de8c959,Namespace:kube-system,Attempt:0,} returns sandbox id \"0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573\""
Apr 21 10:13:35.831844 kubelet[2512]: E0421 10:13:35.831818    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:35.836858 containerd[1463]: time="2026-04-21T10:13:35.836744219Z" level=info msg="CreateContainer within sandbox \"73bdb9020a3d44cd93c9e9228df3892ece1977a5c527a455e017ea20fe7958fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:13:35.838782 containerd[1463]: time="2026-04-21T10:13:35.838683625Z" level=info msg="CreateContainer within sandbox \"0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:13:35.862055 containerd[1463]: time="2026-04-21T10:13:35.861973512Z" level=info msg="CreateContainer within sandbox \"73bdb9020a3d44cd93c9e9228df3892ece1977a5c527a455e017ea20fe7958fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f11e02c79f2b19e394f1a426ed6b15b47f57d1c496d063f8528293082dd8c65\""
Apr 21 10:13:35.863470 containerd[1463]: time="2026-04-21T10:13:35.863413298Z" level=info msg="StartContainer for \"9f11e02c79f2b19e394f1a426ed6b15b47f57d1c496d063f8528293082dd8c65\""
Apr 21 10:13:35.865500 containerd[1463]: time="2026-04-21T10:13:35.865425077Z" level=info msg="CreateContainer within sandbox \"0227698c1409dc9ed39328b9f58ad4afbaaa126ff77be7b1b36868615259c573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8e0773225db04e1372bf37572976e43eb49d3ca7d539714d8ee6fa7c2996dfd\""
Apr 21 10:13:35.866015 containerd[1463]: time="2026-04-21T10:13:35.865981083Z" level=info msg="StartContainer for \"d8e0773225db04e1372bf37572976e43eb49d3ca7d539714d8ee6fa7c2996dfd\""
Apr 21 10:13:35.889707 systemd[1]: Started cri-containerd-9f11e02c79f2b19e394f1a426ed6b15b47f57d1c496d063f8528293082dd8c65.scope - libcontainer container 9f11e02c79f2b19e394f1a426ed6b15b47f57d1c496d063f8528293082dd8c65.
Apr 21 10:13:35.893032 systemd[1]: Started cri-containerd-d8e0773225db04e1372bf37572976e43eb49d3ca7d539714d8ee6fa7c2996dfd.scope - libcontainer container d8e0773225db04e1372bf37572976e43eb49d3ca7d539714d8ee6fa7c2996dfd.
Apr 21 10:13:35.922543 containerd[1463]: time="2026-04-21T10:13:35.922492360Z" level=info msg="StartContainer for \"d8e0773225db04e1372bf37572976e43eb49d3ca7d539714d8ee6fa7c2996dfd\" returns successfully"
Apr 21 10:13:35.922685 containerd[1463]: time="2026-04-21T10:13:35.922653233Z" level=info msg="StartContainer for \"9f11e02c79f2b19e394f1a426ed6b15b47f57d1c496d063f8528293082dd8c65\" returns successfully"
Apr 21 10:13:36.043312 kubelet[2512]: E0421 10:13:36.043087    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:36.046202 kubelet[2512]: E0421 10:13:36.046126    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:36.046202 kubelet[2512]: E0421 10:13:36.046128    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:36.072634 kubelet[2512]: I0421 10:13:36.072416    2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2l4dl" podStartSLOduration=21.072400498 podStartE2EDuration="21.072400498s" podCreationTimestamp="2026-04-21 10:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:36.059991309 +0000 UTC m=+28.338747155" watchObservedRunningTime="2026-04-21 10:13:36.072400498 +0000 UTC m=+28.351156344"
Apr 21 10:13:36.072634 kubelet[2512]: I0421 10:13:36.072505    2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wwbhp" podStartSLOduration=21.072501313 podStartE2EDuration="21.072501313s" podCreationTimestamp="2026-04-21 10:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:36.072175843 +0000 UTC m=+28.350931684" watchObservedRunningTime="2026-04-21 10:13:36.072501313 +0000 UTC m=+28.351257159"
Apr 21 10:13:37.049226 kubelet[2512]: E0421 10:13:37.049150    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:37.049226 kubelet[2512]: E0421 10:13:37.049176    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:38.060793 kubelet[2512]: E0421 10:13:38.060577    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:13:40.177731 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:49520.service - OpenSSH per-connection server daemon (10.0.0.1:49520).
Apr 21 10:13:40.214217 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 49520 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:40.215589 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:40.219308 systemd-logind[1443]: New session 9 of user core.
Apr 21 10:13:40.228539 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:13:40.433635 sshd[3938]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:40.436809 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:49520.service: Deactivated successfully.
Apr 21 10:13:40.438057 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:13:40.438861 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:13:40.439757 systemd-logind[1443]: Removed session 9.
Apr 21 10:13:45.450912 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528).
Apr 21 10:13:45.488488 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:45.490317 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:45.497219 systemd-logind[1443]: New session 10 of user core.
Apr 21 10:13:45.506034 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:13:45.633138 sshd[3953]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:45.635863 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:49528.service: Deactivated successfully.
Apr 21 10:13:45.637247 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:13:45.637774 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:13:45.638569 systemd-logind[1443]: Removed session 10.
Apr 21 10:13:50.646222 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:37600.service - OpenSSH per-connection server daemon (10.0.0.1:37600).
Apr 21 10:13:50.680313 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 37600 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:50.681810 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:50.685826 systemd-logind[1443]: New session 11 of user core.
Apr 21 10:13:50.694551 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 10:13:50.820153 sshd[3970]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:50.822654 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:37600.service: Deactivated successfully.
Apr 21 10:13:50.827782 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 10:13:50.830772 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Apr 21 10:13:50.831777 systemd-logind[1443]: Removed session 11.
Apr 21 10:13:55.839822 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:37606.service - OpenSSH per-connection server daemon (10.0.0.1:37606).
Apr 21 10:13:55.882389 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 37606 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:55.883694 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:55.887848 systemd-logind[1443]: New session 12 of user core.
Apr 21 10:13:55.899682 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 10:13:56.021887 sshd[3986]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:56.028632 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:37606.service: Deactivated successfully.
Apr 21 10:13:56.030417 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 10:13:56.031956 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Apr 21 10:13:56.041711 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:37612.service - OpenSSH per-connection server daemon (10.0.0.1:37612).
Apr 21 10:13:56.042831 systemd-logind[1443]: Removed session 12.
Apr 21 10:13:56.074147 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 37612 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:56.075638 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:56.080579 systemd-logind[1443]: New session 13 of user core.
Apr 21 10:13:56.088638 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:13:56.288982 sshd[4001]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:56.297731 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:37612.service: Deactivated successfully.
Apr 21 10:13:56.299815 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:13:56.301393 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:13:56.313963 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:37626.service - OpenSSH per-connection server daemon (10.0.0.1:37626).
Apr 21 10:13:56.315076 systemd-logind[1443]: Removed session 13.
Apr 21 10:13:56.347881 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 37626 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:13:56.349134 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:56.352905 systemd-logind[1443]: New session 14 of user core.
Apr 21 10:13:56.361577 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:13:56.485832 sshd[4014]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:56.488751 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:37626.service: Deactivated successfully.
Apr 21 10:13:56.490142 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:13:56.490793 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:13:56.491673 systemd-logind[1443]: Removed session 14.
Apr 21 10:14:01.498302 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:37598.service - OpenSSH per-connection server daemon (10.0.0.1:37598).
Apr 21 10:14:01.534854 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 37598 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:01.536266 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:01.540341 systemd-logind[1443]: New session 15 of user core.
Apr 21 10:14:01.548549 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:14:01.647114 sshd[4031]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:01.649822 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:37598.service: Deactivated successfully.
Apr 21 10:14:01.651061 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:14:01.651708 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:14:01.652780 systemd-logind[1443]: Removed session 15.
Apr 21 10:14:06.664813 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:37610.service - OpenSSH per-connection server daemon (10.0.0.1:37610).
Apr 21 10:14:06.697805 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:06.699072 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:06.702278 systemd-logind[1443]: New session 16 of user core.
Apr 21 10:14:06.711839 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:14:06.820911 sshd[4045]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:06.827326 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:37610.service: Deactivated successfully.
Apr 21 10:14:06.828714 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:14:06.829900 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:14:06.835571 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:37614.service - OpenSSH per-connection server daemon (10.0.0.1:37614).
Apr 21 10:14:06.836166 systemd-logind[1443]: Removed session 16.
Apr 21 10:14:06.864602 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 37614 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:06.865833 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:06.869248 systemd-logind[1443]: New session 17 of user core.
Apr 21 10:14:06.882581 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:14:07.060904 sshd[4060]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:07.070523 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:37614.service: Deactivated successfully.
Apr 21 10:14:07.071760 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:14:07.072876 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:14:07.073889 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:37622.service - OpenSSH per-connection server daemon (10.0.0.1:37622).
Apr 21 10:14:07.074515 systemd-logind[1443]: Removed session 17.
Apr 21 10:14:07.109377 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:07.110456 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:07.114160 systemd-logind[1443]: New session 18 of user core.
Apr 21 10:14:07.123521 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:14:07.757191 sshd[4073]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:07.764224 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:37622.service: Deactivated successfully.
Apr 21 10:14:07.766860 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:14:07.768752 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:14:07.778092 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628).
Apr 21 10:14:07.779802 systemd-logind[1443]: Removed session 18.
Apr 21 10:14:07.811885 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:07.813580 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:07.818556 systemd-logind[1443]: New session 19 of user core.
Apr 21 10:14:07.821947 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:14:08.125724 sshd[4093]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:08.132613 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:37628.service: Deactivated successfully.
Apr 21 10:14:08.134000 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:14:08.135186 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:14:08.140583 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:37634.service - OpenSSH per-connection server daemon (10.0.0.1:37634).
Apr 21 10:14:08.141251 systemd-logind[1443]: Removed session 19.
Apr 21 10:14:08.171097 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 37634 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:08.172463 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:08.176958 systemd-logind[1443]: New session 20 of user core.
Apr 21 10:14:08.185315 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:14:08.284889 sshd[4107]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:08.287586 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:37634.service: Deactivated successfully.
Apr 21 10:14:08.288852 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:14:08.289540 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:14:08.290280 systemd-logind[1443]: Removed session 20.
Apr 21 10:14:13.297797 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:40574.service - OpenSSH per-connection server daemon (10.0.0.1:40574).
Apr 21 10:14:13.334522 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 40574 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:13.335741 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:13.339379 systemd-logind[1443]: New session 21 of user core.
Apr 21 10:14:13.349521 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:14:13.454114 sshd[4123]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:13.456921 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:40574.service: Deactivated successfully.
Apr 21 10:14:13.458228 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:14:13.458834 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:14:13.459676 systemd-logind[1443]: Removed session 21.
Apr 21 10:14:18.480684 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:40590.service - OpenSSH per-connection server daemon (10.0.0.1:40590).
Apr 21 10:14:18.510227 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 40590 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:18.511561 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:18.515076 systemd-logind[1443]: New session 22 of user core.
Apr 21 10:14:18.528559 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:14:18.645840 sshd[4140]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:18.648477 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:40590.service: Deactivated successfully.
Apr 21 10:14:18.649757 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:14:18.650326 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:14:18.651099 systemd-logind[1443]: Removed session 22.
Apr 21 10:14:18.906063 kubelet[2512]: E0421 10:14:18.905844    2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:14:23.656653 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:37492.service - OpenSSH per-connection server daemon (10.0.0.1:37492).
Apr 21 10:14:23.692415 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 37492 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:23.693791 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:23.697509 systemd-logind[1443]: New session 23 of user core.
Apr 21 10:14:23.704510 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:14:23.809251 sshd[4154]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:23.817813 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:37492.service: Deactivated successfully.
Apr 21 10:14:23.819100 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:14:23.820292 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:14:23.823701 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:37496.service - OpenSSH per-connection server daemon (10.0.0.1:37496).
Apr 21 10:14:23.824990 systemd-logind[1443]: Removed session 23.
Apr 21 10:14:23.856735 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 37496 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:14:23.857946 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:23.862549 systemd-logind[1443]: New session 24 of user core.
Apr 21 10:14:23.867513 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:14:25.186609 containerd[1463]: time="2026-04-21T10:14:25.186557802Z" level=info msg="StopContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" with timeout 30 (s)"
Apr 21 10:14:25.187329 containerd[1463]: time="2026-04-21T10:14:25.187305855Z" level=info msg="Stop container \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" with signal terminated"
Apr 21 10:14:25.195973 systemd[1]: cri-containerd-d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f.scope: Deactivated successfully.
Apr 21 10:14:25.211440 containerd[1463]: time="2026-04-21T10:14:25.210942088Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:14:25.215775 containerd[1463]: time="2026-04-21T10:14:25.215707703Z" level=info msg="StopContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" with timeout 2 (s)"
Apr 21 10:14:25.216165 containerd[1463]: time="2026-04-21T10:14:25.216151021Z" level=info msg="Stop container \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" with signal terminated"
Apr 21 10:14:25.222327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f-rootfs.mount: Deactivated successfully.
Apr 21 10:14:25.222433 systemd-networkd[1406]: lxc_health: Link DOWN
Apr 21 10:14:25.222436 systemd-networkd[1406]: lxc_health: Lost carrier
Apr 21 10:14:25.235817 containerd[1463]: time="2026-04-21T10:14:25.235752856Z" level=info msg="shim disconnected" id=d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f namespace=k8s.io
Apr 21 10:14:25.235817 containerd[1463]: time="2026-04-21T10:14:25.235804437Z" level=warning msg="cleaning up after shim disconnected" id=d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f namespace=k8s.io
Apr 21 10:14:25.235817 containerd[1463]: time="2026-04-21T10:14:25.235811343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:25.243764 systemd[1]: cri-containerd-f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78.scope: Deactivated successfully.
Apr 21 10:14:25.244194 systemd[1]: cri-containerd-f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78.scope: Consumed 6.305s CPU time.
Apr 21 10:14:25.257018 containerd[1463]: time="2026-04-21T10:14:25.255838573Z" level=info msg="StopContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" returns successfully"
Apr 21 10:14:25.259681 containerd[1463]: time="2026-04-21T10:14:25.259643730Z" level=info msg="StopPodSandbox for \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\""
Apr 21 10:14:25.259739 containerd[1463]: time="2026-04-21T10:14:25.259697649Z" level=info msg="Container to stop \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.261903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095-shm.mount: Deactivated successfully.
Apr 21 10:14:25.267796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78-rootfs.mount: Deactivated successfully.
Apr 21 10:14:25.269683 systemd[1]: cri-containerd-48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095.scope: Deactivated successfully.
Apr 21 10:14:25.285233 containerd[1463]: time="2026-04-21T10:14:25.284578768Z" level=info msg="shim disconnected" id=f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78 namespace=k8s.io
Apr 21 10:14:25.285233 containerd[1463]: time="2026-04-21T10:14:25.284620987Z" level=warning msg="cleaning up after shim disconnected" id=f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78 namespace=k8s.io
Apr 21 10:14:25.285233 containerd[1463]: time="2026-04-21T10:14:25.284635243Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:25.289519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095-rootfs.mount: Deactivated successfully.
Apr 21 10:14:25.298069 containerd[1463]: time="2026-04-21T10:14:25.298001687Z" level=info msg="shim disconnected" id=48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095 namespace=k8s.io
Apr 21 10:14:25.298069 containerd[1463]: time="2026-04-21T10:14:25.298051699Z" level=warning msg="cleaning up after shim disconnected" id=48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095 namespace=k8s.io
Apr 21 10:14:25.298069 containerd[1463]: time="2026-04-21T10:14:25.298062638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:25.301430 containerd[1463]: time="2026-04-21T10:14:25.301390347Z" level=info msg="StopContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" returns successfully"
Apr 21 10:14:25.302289 containerd[1463]: time="2026-04-21T10:14:25.302236347Z" level=info msg="StopPodSandbox for \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\""
Apr 21 10:14:25.302289 containerd[1463]: time="2026-04-21T10:14:25.302299005Z" level=info msg="Container to stop \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.302289 containerd[1463]: time="2026-04-21T10:14:25.302308180Z" level=info msg="Container to stop \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.302550 containerd[1463]: time="2026-04-21T10:14:25.302314627Z" level=info msg="Container to stop \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.302550 containerd[1463]: time="2026-04-21T10:14:25.302321402Z" level=info msg="Container to stop \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.302550 containerd[1463]: time="2026-04-21T10:14:25.302329182Z" level=info msg="Container to stop \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:14:25.308662 systemd[1]: cri-containerd-1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93.scope: Deactivated successfully.
Apr 21 10:14:25.320418 containerd[1463]: time="2026-04-21T10:14:25.320316201Z" level=info msg="TearDown network for sandbox \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\" successfully"
Apr 21 10:14:25.320418 containerd[1463]: time="2026-04-21T10:14:25.320395700Z" level=info msg="StopPodSandbox for \"48addf0078507c12c3c1d5508fe4be6f98c50e7d034559068524d83df02f2095\" returns successfully"
Apr 21 10:14:25.335128 containerd[1463]: time="2026-04-21T10:14:25.335010567Z" level=info msg="shim disconnected" id=1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93 namespace=k8s.io
Apr 21 10:14:25.335426 containerd[1463]: time="2026-04-21T10:14:25.335064932Z" level=warning msg="cleaning up after shim disconnected" id=1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93 namespace=k8s.io
Apr 21 10:14:25.335426 containerd[1463]: time="2026-04-21T10:14:25.335212713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:25.349319 containerd[1463]: time="2026-04-21T10:14:25.349176740Z" level=info msg="TearDown network for sandbox \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" successfully"
Apr 21 10:14:25.349319 containerd[1463]: time="2026-04-21T10:14:25.349203992Z" level=info msg="StopPodSandbox for \"1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93\" returns successfully"
Apr 21 10:14:25.484534 kubelet[2512]: I0421 10:14:25.484449 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-run\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.484534 kubelet[2512]: I0421 10:14:25.484525 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-lib-modules\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.484534 kubelet[2512]: I0421 10:14:25.484557 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5685b68a-829a-423f-864f-f0ad638578f5-cilium-config-path\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.484534 kubelet[2512]: I0421 10:14:25.484577 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cf9b5cd-e317-47c3-b873-421b5120912f-cilium-config-path\") pod \"6cf9b5cd-e317-47c3-b873-421b5120912f\" (UID: \"6cf9b5cd-e317-47c3-b873-421b5120912f\") "
Apr 21 10:14:25.484534 kubelet[2512]: I0421 10:14:25.484592 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-kernel\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484610 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-hubble-tls\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484627 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmrt\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-kube-api-access-lfmrt\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484639 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-bpf-maps\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484679 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cni-path\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484693 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-xtables-lock\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485202 kubelet[2512]: I0421 10:14:25.484708 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5685b68a-829a-423f-864f-f0ad638578f5-clustermesh-secrets\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484721 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-net\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484734 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-hostproc\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484750 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5wpd\" (UniqueName: \"kubernetes.io/projected/6cf9b5cd-e317-47c3-b873-421b5120912f-kube-api-access-x5wpd\") pod \"6cf9b5cd-e317-47c3-b873-421b5120912f\" (UID: \"6cf9b5cd-e317-47c3-b873-421b5120912f\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484763 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-etc-cni-netd\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484776 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-cgroup\") pod \"5685b68a-829a-423f-864f-f0ad638578f5\" (UID: \"5685b68a-829a-423f-864f-f0ad638578f5\") "
Apr 21 10:14:25.485412 kubelet[2512]: I0421 10:14:25.484829 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485539 kubelet[2512]: I0421 10:14:25.484830 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485539 kubelet[2512]: I0421 10:14:25.484834 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485539 kubelet[2512]: I0421 10:14:25.484874 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cni-path" (OuterVolumeSpecName: "cni-path") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485539 kubelet[2512]: I0421 10:14:25.485134 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485539 kubelet[2512]: I0421 10:14:25.485179 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.485640 kubelet[2512]: I0421 10:14:25.485205 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-hostproc" (OuterVolumeSpecName: "hostproc") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.487402 kubelet[2512]: I0421 10:14:25.487381 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.487592 kubelet[2512]: I0421 10:14:25.487387 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.487592 kubelet[2512]: I0421 10:14:25.487477 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf9b5cd-e317-47c3-b873-421b5120912f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cf9b5cd-e317-47c3-b873-421b5120912f" (UID: "6cf9b5cd-e317-47c3-b873-421b5120912f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:14:25.487592 kubelet[2512]: I0421 10:14:25.487491 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:14:25.488164 kubelet[2512]: I0421 10:14:25.488141 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf9b5cd-e317-47c3-b873-421b5120912f-kube-api-access-x5wpd" (OuterVolumeSpecName: "kube-api-access-x5wpd") pod "6cf9b5cd-e317-47c3-b873-421b5120912f" (UID: "6cf9b5cd-e317-47c3-b873-421b5120912f"). InnerVolumeSpecName "kube-api-access-x5wpd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:14:25.488331 kubelet[2512]: I0421 10:14:25.488240 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-kube-api-access-lfmrt" (OuterVolumeSpecName: "kube-api-access-lfmrt") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "kube-api-access-lfmrt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:14:25.488677 kubelet[2512]: I0421 10:14:25.488630 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5685b68a-829a-423f-864f-f0ad638578f5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 10:14:25.489631 kubelet[2512]: I0421 10:14:25.489593 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5685b68a-829a-423f-864f-f0ad638578f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:14:25.489943 kubelet[2512]: I0421 10:14:25.489923 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5685b68a-829a-423f-864f-f0ad638578f5" (UID: "5685b68a-829a-423f-864f-f0ad638578f5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585914 2512 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585963 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x5wpd\" (UniqueName: \"kubernetes.io/projected/6cf9b5cd-e317-47c3-b873-421b5120912f-kube-api-access-x5wpd\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585973 2512 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585984 2512 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585992 2512 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.585999 2512 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.586006 2512 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5685b68a-829a-423f-864f-f0ad638578f5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.585999 kubelet[2512]: I0421 10:14:25.586020 2512 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cf9b5cd-e317-47c3-b873-421b5120912f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586027 2512 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586036 2512 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586042 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfmrt\" (UniqueName: \"kubernetes.io/projected/5685b68a-829a-423f-864f-f0ad638578f5-kube-api-access-lfmrt\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586047 2512 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586053 2512 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586059 2512 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586064 2512 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5685b68a-829a-423f-864f-f0ad638578f5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.586533 kubelet[2512]: I0421 10:14:25.586069 2512 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5685b68a-829a-423f-864f-f0ad638578f5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 21 10:14:25.912468 systemd[1]: Removed slice kubepods-besteffort-pod6cf9b5cd_e317_47c3_b873_421b5120912f.slice - libcontainer container kubepods-besteffort-pod6cf9b5cd_e317_47c3_b873_421b5120912f.slice.
Apr 21 10:14:25.913691 systemd[1]: Removed slice kubepods-burstable-pod5685b68a_829a_423f_864f_f0ad638578f5.slice - libcontainer container kubepods-burstable-pod5685b68a_829a_423f_864f_f0ad638578f5.slice.
Apr 21 10:14:25.913808 systemd[1]: kubepods-burstable-pod5685b68a_829a_423f_864f_f0ad638578f5.slice: Consumed 6.384s CPU time.
Apr 21 10:14:26.198778 systemd[1]: var-lib-kubelet-pods-6cf9b5cd\x2de317\x2d47c3\x2db873\x2d421b5120912f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5wpd.mount: Deactivated successfully.
Apr 21 10:14:26.198917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93-rootfs.mount: Deactivated successfully.
Apr 21 10:14:26.198959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1acda848a1148af21bc3d1c24db83a7c80e2bfc0d4a1b6ca981ad9f23379da93-shm.mount: Deactivated successfully.
Apr 21 10:14:26.199057 systemd[1]: var-lib-kubelet-pods-5685b68a\x2d829a\x2d423f\x2d864f\x2df0ad638578f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlfmrt.mount: Deactivated successfully.
Apr 21 10:14:26.199057 systemd[1]: var-lib-kubelet-pods-5685b68a\x2d829a\x2d423f\x2d864f\x2df0ad638578f5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 21 10:14:26.199097 systemd[1]: var-lib-kubelet-pods-5685b68a\x2d829a\x2d423f\x2d864f\x2df0ad638578f5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 21 10:14:26.213201 kubelet[2512]: I0421 10:14:26.213137 2512 scope.go:117] "RemoveContainer" containerID="f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78"
Apr 21 10:14:26.216836 containerd[1463]: time="2026-04-21T10:14:26.216803048Z" level=info msg="RemoveContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\""
Apr 21 10:14:26.220902 containerd[1463]: time="2026-04-21T10:14:26.220866326Z" level=info msg="RemoveContainer for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" returns successfully"
Apr 21 10:14:26.221098 kubelet[2512]: I0421 10:14:26.221070 2512 scope.go:117] "RemoveContainer" containerID="9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2"
Apr 21 10:14:26.221997 containerd[1463]: time="2026-04-21T10:14:26.221969725Z" level=info msg="RemoveContainer for \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\""
Apr 21 10:14:26.225311 containerd[1463]: time="2026-04-21T10:14:26.225281215Z" level=info msg="RemoveContainer for \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\" returns successfully"
Apr 21 10:14:26.225969 kubelet[2512]: I0421 10:14:26.225902 2512 scope.go:117] "RemoveContainer" containerID="9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993"
Apr 21 10:14:26.228025 containerd[1463]: time="2026-04-21T10:14:26.227995698Z" level=info msg="RemoveContainer for \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\""
Apr 21 10:14:26.231062 containerd[1463]: time="2026-04-21T10:14:26.231030720Z" level=info msg="RemoveContainer for \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\" returns successfully"
Apr 21 10:14:26.231196 kubelet[2512]: I0421 10:14:26.231169 2512 scope.go:117] "RemoveContainer" containerID="4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b"
Apr 21 10:14:26.232215 containerd[1463]: time="2026-04-21T10:14:26.232189484Z" level=info msg="RemoveContainer for \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\""
Apr 21 10:14:26.234854 containerd[1463]: time="2026-04-21T10:14:26.234828313Z" level=info msg="RemoveContainer for \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\" returns successfully"
Apr 21 10:14:26.235056 kubelet[2512]: I0421 10:14:26.235035 2512 scope.go:117] "RemoveContainer" containerID="34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b"
Apr 21 10:14:26.235905 containerd[1463]: time="2026-04-21T10:14:26.235881085Z" level=info msg="RemoveContainer for \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\""
Apr 21 10:14:26.240741 containerd[1463]: time="2026-04-21T10:14:26.240709809Z" level=info msg="RemoveContainer for \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\" returns successfully"
Apr 21 10:14:26.240949 kubelet[2512]: I0421 10:14:26.240867 2512 scope.go:117] "RemoveContainer" containerID="f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78"
Apr 21 10:14:26.243915 containerd[1463]: time="2026-04-21T10:14:26.243862136Z" level=error msg="ContainerStatus for \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\": not found"
Apr 21 10:14:26.249428 kubelet[2512]: E0421 10:14:26.249395 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\": not found" containerID="f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78"
Apr 21 10:14:26.249486 kubelet[2512]: I0421 10:14:26.249442 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78"} err="failed to get container status \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\": rpc error: code = NotFound desc = an error occurred when try to find container \"f28c64b527cbdb9f081516dccf19caebea8c11d7ae374c82d52ca13c77e68e78\": not found"
Apr 21 10:14:26.249486 kubelet[2512]: I0421 10:14:26.249474 2512 scope.go:117] "RemoveContainer" containerID="9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2"
Apr 21 10:14:26.249669 containerd[1463]: time="2026-04-21T10:14:26.249635393Z" level=error msg="ContainerStatus for \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\": not found"
Apr 21 10:14:26.249805 kubelet[2512]: E0421 10:14:26.249748 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\": not found" containerID="9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2"
Apr 21 10:14:26.249805 kubelet[2512]: I0421 10:14:26.249760 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2"} err="failed to get container status \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9de36df50add3a0417742697f73a7af48770d6bda4d26baf276122ed3d5bd8f2\": not found"
Apr 21 10:14:26.249805 kubelet[2512]: I0421 10:14:26.249769 2512 scope.go:117] "RemoveContainer" containerID="9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993"
Apr 21 10:14:26.249923 containerd[1463]: time="2026-04-21T10:14:26.249895207Z" level=error msg="ContainerStatus for \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\": not found"
Apr 21 10:14:26.250028 kubelet[2512]: E0421 10:14:26.250007 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\": not found" containerID="9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993"
Apr 21 10:14:26.250069 kubelet[2512]: I0421 10:14:26.250031 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993"} err="failed to get container status \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ba0c5e44d4397f6861e94578385176e5fe37bc89d29f5e68799d1d0e0c64993\": not found"
Apr 21 10:14:26.250069 kubelet[2512]: I0421 10:14:26.250043 2512 scope.go:117] "RemoveContainer" containerID="4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b"
Apr 21 10:14:26.250266 containerd[1463]: time="2026-04-21T10:14:26.250187920Z" level=error msg="ContainerStatus for \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\": not found"
Apr 21 10:14:26.250447 kubelet[2512]: E0421 10:14:26.250427 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\": not found" containerID="4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b" Apr 21 10:14:26.250481 kubelet[2512]: I0421 10:14:26.250452 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b"} err="failed to get container status \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a337b8b5b2549a9c2d31e2601ae5a5f6f96d3a6041e8044a7271ec70ba8935b\": not found" Apr 21 10:14:26.250481 kubelet[2512]: I0421 10:14:26.250468 2512 scope.go:117] "RemoveContainer" containerID="34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b" Apr 21 10:14:26.250681 containerd[1463]: time="2026-04-21T10:14:26.250650936Z" level=error msg="ContainerStatus for \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\": not found" Apr 21 10:14:26.250751 kubelet[2512]: E0421 10:14:26.250741 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\": not found" containerID="34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b" Apr 21 10:14:26.250772 kubelet[2512]: I0421 10:14:26.250754 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b"} err="failed to get container status \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"34d76351a3656e99894123a6834c42e900aeaa0d392c96fffdab858aa489e41b\": not found" Apr 21 10:14:26.250772 kubelet[2512]: I0421 10:14:26.250765 2512 scope.go:117] "RemoveContainer" containerID="d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f" Apr 21 10:14:26.251593 containerd[1463]: time="2026-04-21T10:14:26.251571662Z" level=info msg="RemoveContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\"" Apr 21 10:14:26.253834 containerd[1463]: time="2026-04-21T10:14:26.253799485Z" level=info msg="RemoveContainer for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" returns successfully" Apr 21 10:14:26.253988 kubelet[2512]: I0421 10:14:26.253947 2512 scope.go:117] "RemoveContainer" containerID="d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f" Apr 21 10:14:26.254130 containerd[1463]: time="2026-04-21T10:14:26.254106360Z" level=error msg="ContainerStatus for \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\": not found" Apr 21 10:14:26.254261 kubelet[2512]: E0421 10:14:26.254213 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\": not found" containerID="d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f" Apr 21 10:14:26.254291 kubelet[2512]: I0421 10:14:26.254258 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f"} err="failed to get container status \"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d7bda6e458362747d18e26befa91c4e16c02993af322d2ea010a298eb883ca8f\": not found" Apr 21 10:14:26.906405 kubelet[2512]: E0421 10:14:26.906335 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:27.153537 sshd[4169]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:27.163441 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:37496.service: Deactivated successfully. Apr 21 10:14:27.164667 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 10:14:27.165799 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Apr 21 10:14:27.166814 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:37504.service - OpenSSH per-connection server daemon (10.0.0.1:37504). Apr 21 10:14:27.167460 systemd-logind[1443]: Removed session 24. Apr 21 10:14:27.203266 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 37504 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:14:27.204763 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:27.209599 systemd-logind[1443]: New session 25 of user core. Apr 21 10:14:27.219541 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 21 10:14:27.810759 sshd[4336]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:27.820113 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:37504.service: Deactivated successfully. Apr 21 10:14:27.822520 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 10:14:27.825124 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Apr 21 10:14:27.834181 systemd[1]: Started sshd@25-10.0.0.21:22-10.0.0.1:37512.service - OpenSSH per-connection server daemon (10.0.0.1:37512). Apr 21 10:14:27.837143 systemd-logind[1443]: Removed session 25. 
Apr 21 10:14:27.849165 systemd[1]: Created slice kubepods-burstable-podf878e634_e137_46b1_8e95_12bd7333d740.slice - libcontainer container kubepods-burstable-podf878e634_e137_46b1_8e95_12bd7333d740.slice. Apr 21 10:14:27.867387 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 37512 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:14:27.868720 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:27.872108 systemd-logind[1443]: New session 26 of user core. Apr 21 10:14:27.878533 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 21 10:14:27.901469 kubelet[2512]: I0421 10:14:27.901436 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f878e634-e137-46b1-8e95-12bd7333d740-cilium-config-path\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901469 kubelet[2512]: I0421 10:14:27.901475 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f878e634-e137-46b1-8e95-12bd7333d740-cilium-ipsec-secrets\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901584 kubelet[2512]: I0421 10:14:27.901495 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-etc-cni-netd\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901584 kubelet[2512]: I0421 10:14:27.901511 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpgh\" (UniqueName: 
\"kubernetes.io/projected/f878e634-e137-46b1-8e95-12bd7333d740-kube-api-access-wtpgh\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901584 kubelet[2512]: I0421 10:14:27.901529 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-cilium-run\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901584 kubelet[2512]: I0421 10:14:27.901542 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-hostproc\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901584 kubelet[2512]: I0421 10:14:27.901556 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f878e634-e137-46b1-8e95-12bd7333d740-clustermesh-secrets\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901668 kubelet[2512]: I0421 10:14:27.901612 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f878e634-e137-46b1-8e95-12bd7333d740-hubble-tls\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901668 kubelet[2512]: I0421 10:14:27.901626 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-cni-path\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") 
" pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901668 kubelet[2512]: I0421 10:14:27.901638 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-bpf-maps\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901668 kubelet[2512]: I0421 10:14:27.901650 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-cilium-cgroup\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901668 kubelet[2512]: I0421 10:14:27.901661 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-lib-modules\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901744 kubelet[2512]: I0421 10:14:27.901672 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-xtables-lock\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901744 kubelet[2512]: I0421 10:14:27.901684 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-host-proc-sys-net\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.901744 kubelet[2512]: I0421 10:14:27.901698 2512 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f878e634-e137-46b1-8e95-12bd7333d740-host-proc-sys-kernel\") pod \"cilium-cfwlw\" (UID: \"f878e634-e137-46b1-8e95-12bd7333d740\") " pod="kube-system/cilium-cfwlw" Apr 21 10:14:27.907903 kubelet[2512]: I0421 10:14:27.907838 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5685b68a-829a-423f-864f-f0ad638578f5" path="/var/lib/kubelet/pods/5685b68a-829a-423f-864f-f0ad638578f5/volumes" Apr 21 10:14:27.908348 kubelet[2512]: I0421 10:14:27.908302 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf9b5cd-e317-47c3-b873-421b5120912f" path="/var/lib/kubelet/pods/6cf9b5cd-e317-47c3-b873-421b5120912f/volumes" Apr 21 10:14:27.928018 sshd[4349]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:27.940454 systemd[1]: sshd@25-10.0.0.21:22-10.0.0.1:37512.service: Deactivated successfully. Apr 21 10:14:27.941644 systemd[1]: session-26.scope: Deactivated successfully. Apr 21 10:14:27.942679 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Apr 21 10:14:27.955584 systemd[1]: Started sshd@26-10.0.0.21:22-10.0.0.1:37514.service - OpenSSH per-connection server daemon (10.0.0.1:37514). Apr 21 10:14:27.956463 systemd-logind[1443]: Removed session 26. Apr 21 10:14:27.959807 kubelet[2512]: E0421 10:14:27.959780 2512 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:14:27.987819 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 37514 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:14:27.989150 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:27.992608 systemd-logind[1443]: New session 27 of user core. 
Apr 21 10:14:27.997508 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 21 10:14:28.162109 kubelet[2512]: E0421 10:14:28.161945 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:28.162451 containerd[1463]: time="2026-04-21T10:14:28.162338995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfwlw,Uid:f878e634-e137-46b1-8e95-12bd7333d740,Namespace:kube-system,Attempt:0,}" Apr 21 10:14:28.187080 containerd[1463]: time="2026-04-21T10:14:28.186905636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:14:28.187080 containerd[1463]: time="2026-04-21T10:14:28.187032092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:14:28.187080 containerd[1463]: time="2026-04-21T10:14:28.187049956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:14:28.187222 containerd[1463]: time="2026-04-21T10:14:28.187125811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:14:28.204526 systemd[1]: Started cri-containerd-878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3.scope - libcontainer container 878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3. 
Apr 21 10:14:28.229724 containerd[1463]: time="2026-04-21T10:14:28.229657731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfwlw,Uid:f878e634-e137-46b1-8e95-12bd7333d740,Namespace:kube-system,Attempt:0,} returns sandbox id \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\"" Apr 21 10:14:28.235150 kubelet[2512]: E0421 10:14:28.235120 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:28.240802 containerd[1463]: time="2026-04-21T10:14:28.240763133Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:14:28.252215 containerd[1463]: time="2026-04-21T10:14:28.252135525Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c\"" Apr 21 10:14:28.253059 containerd[1463]: time="2026-04-21T10:14:28.253040998Z" level=info msg="StartContainer for \"71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c\"" Apr 21 10:14:28.288593 systemd[1]: Started cri-containerd-71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c.scope - libcontainer container 71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c. Apr 21 10:14:28.309560 containerd[1463]: time="2026-04-21T10:14:28.309494312Z" level=info msg="StartContainer for \"71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c\" returns successfully" Apr 21 10:14:28.317720 systemd[1]: cri-containerd-71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c.scope: Deactivated successfully. 
Apr 21 10:14:28.344281 containerd[1463]: time="2026-04-21T10:14:28.344210556Z" level=info msg="shim disconnected" id=71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c namespace=k8s.io Apr 21 10:14:28.344281 containerd[1463]: time="2026-04-21T10:14:28.344267440Z" level=warning msg="cleaning up after shim disconnected" id=71d0930f4479a96c18750671058447e8e8f70fbb09ef636287b67dc7cc5acc8c namespace=k8s.io Apr 21 10:14:28.344281 containerd[1463]: time="2026-04-21T10:14:28.344274943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:14:29.232726 kubelet[2512]: E0421 10:14:29.232675 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:29.238452 containerd[1463]: time="2026-04-21T10:14:29.238400528Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:14:29.254302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734859808.mount: Deactivated successfully. Apr 21 10:14:29.256806 containerd[1463]: time="2026-04-21T10:14:29.256765316Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e\"" Apr 21 10:14:29.257323 containerd[1463]: time="2026-04-21T10:14:29.257274386Z" level=info msg="StartContainer for \"577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e\"" Apr 21 10:14:29.287587 systemd[1]: Started cri-containerd-577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e.scope - libcontainer container 577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e. 
Apr 21 10:14:29.308948 containerd[1463]: time="2026-04-21T10:14:29.308913986Z" level=info msg="StartContainer for \"577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e\" returns successfully" Apr 21 10:14:29.312749 systemd[1]: cri-containerd-577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e.scope: Deactivated successfully. Apr 21 10:14:29.338203 containerd[1463]: time="2026-04-21T10:14:29.338124507Z" level=info msg="shim disconnected" id=577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e namespace=k8s.io Apr 21 10:14:29.338203 containerd[1463]: time="2026-04-21T10:14:29.338186297Z" level=warning msg="cleaning up after shim disconnected" id=577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e namespace=k8s.io Apr 21 10:14:29.338203 containerd[1463]: time="2026-04-21T10:14:29.338193543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:14:29.744162 kubelet[2512]: I0421 10:14:29.744066 2512 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T10:14:29Z","lastTransitionTime":"2026-04-21T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 21 10:14:30.005965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-577ee3cd1a26c716cf2d35cadeaf299bba8b20f7f4f9770eee4c2031c9497e0e-rootfs.mount: Deactivated successfully. 
Apr 21 10:14:30.244346 kubelet[2512]: E0421 10:14:30.244300 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:30.250392 containerd[1463]: time="2026-04-21T10:14:30.250303729Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:14:30.265595 containerd[1463]: time="2026-04-21T10:14:30.265490077Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0\"" Apr 21 10:14:30.266042 containerd[1463]: time="2026-04-21T10:14:30.266003646Z" level=info msg="StartContainer for \"a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0\"" Apr 21 10:14:30.300617 systemd[1]: Started cri-containerd-a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0.scope - libcontainer container a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0. Apr 21 10:14:30.321177 systemd[1]: cri-containerd-a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0.scope: Deactivated successfully. 
Apr 21 10:14:30.322787 containerd[1463]: time="2026-04-21T10:14:30.322747012Z" level=info msg="StartContainer for \"a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0\" returns successfully" Apr 21 10:14:30.345820 containerd[1463]: time="2026-04-21T10:14:30.345291499Z" level=info msg="shim disconnected" id=a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0 namespace=k8s.io Apr 21 10:14:30.345820 containerd[1463]: time="2026-04-21T10:14:30.345788408Z" level=warning msg="cleaning up after shim disconnected" id=a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0 namespace=k8s.io Apr 21 10:14:30.345820 containerd[1463]: time="2026-04-21T10:14:30.345818827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:14:31.006683 systemd[1]: run-containerd-runc-k8s.io-a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0-runc.9eRMLC.mount: Deactivated successfully. Apr 21 10:14:31.006801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5e8b5813916f504da8569319d22964663fb10c841e58b4c94e57398435477a0-rootfs.mount: Deactivated successfully. Apr 21 10:14:31.249163 kubelet[2512]: E0421 10:14:31.249105 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:31.260544 containerd[1463]: time="2026-04-21T10:14:31.257583197Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:14:31.279477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552859519.mount: Deactivated successfully. 
Apr 21 10:14:31.282417 containerd[1463]: time="2026-04-21T10:14:31.282338137Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3\"" Apr 21 10:14:31.283030 containerd[1463]: time="2026-04-21T10:14:31.282967658Z" level=info msg="StartContainer for \"48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3\"" Apr 21 10:14:31.312892 systemd[1]: Started cri-containerd-48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3.scope - libcontainer container 48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3. Apr 21 10:14:31.334605 systemd[1]: cri-containerd-48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3.scope: Deactivated successfully. Apr 21 10:14:31.338007 containerd[1463]: time="2026-04-21T10:14:31.337973838Z" level=info msg="StartContainer for \"48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3\" returns successfully" Apr 21 10:14:31.362593 containerd[1463]: time="2026-04-21T10:14:31.362526993Z" level=info msg="shim disconnected" id=48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3 namespace=k8s.io Apr 21 10:14:31.362593 containerd[1463]: time="2026-04-21T10:14:31.362630127Z" level=warning msg="cleaning up after shim disconnected" id=48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3 namespace=k8s.io Apr 21 10:14:31.362593 containerd[1463]: time="2026-04-21T10:14:31.362640559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:14:32.007940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48d3bbcb0dcf7f304548651293fcd9c97e16c866cf4bca53fae24a27ce39c1f3-rootfs.mount: Deactivated successfully. 
Apr 21 10:14:32.258905 kubelet[2512]: E0421 10:14:32.258144 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:32.266216 containerd[1463]: time="2026-04-21T10:14:32.266161538Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:14:32.286713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450625682.mount: Deactivated successfully. Apr 21 10:14:32.290672 containerd[1463]: time="2026-04-21T10:14:32.290576984Z" level=info msg="CreateContainer within sandbox \"878241a5aeb41651beb34897e28b7c17ca60bbbe4ea58ce8fae5dd02a644a2a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3f4d36e76a09be6898da570c5d3ee66bdfa33cea43fd0a57bd857830f523aff\"" Apr 21 10:14:32.291793 containerd[1463]: time="2026-04-21T10:14:32.291749482Z" level=info msg="StartContainer for \"c3f4d36e76a09be6898da570c5d3ee66bdfa33cea43fd0a57bd857830f523aff\"" Apr 21 10:14:32.329582 systemd[1]: Started cri-containerd-c3f4d36e76a09be6898da570c5d3ee66bdfa33cea43fd0a57bd857830f523aff.scope - libcontainer container c3f4d36e76a09be6898da570c5d3ee66bdfa33cea43fd0a57bd857830f523aff. 
Apr 21 10:14:32.365109 containerd[1463]: time="2026-04-21T10:14:32.365071079Z" level=info msg="StartContainer for \"c3f4d36e76a09be6898da570c5d3ee66bdfa33cea43fd0a57bd857830f523aff\" returns successfully" Apr 21 10:14:32.684640 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 21 10:14:32.906054 kubelet[2512]: E0421 10:14:32.906022 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:33.262891 kubelet[2512]: E0421 10:14:33.262839 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:33.276654 kubelet[2512]: I0421 10:14:33.276588 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cfwlw" podStartSLOduration=6.276575055 podStartE2EDuration="6.276575055s" podCreationTimestamp="2026-04-21 10:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:14:33.276283273 +0000 UTC m=+85.555039117" watchObservedRunningTime="2026-04-21 10:14:33.276575055 +0000 UTC m=+85.555330898" Apr 21 10:14:34.264547 kubelet[2512]: E0421 10:14:34.264515 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:34.905782 kubelet[2512]: E0421 10:14:34.905742 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:35.650079 systemd-networkd[1406]: lxc_health: Link UP Apr 21 10:14:35.662632 systemd-networkd[1406]: lxc_health: Gained carrier Apr 21 10:14:36.163674 kubelet[2512]: E0421 
10:14:36.163631 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:36.271880 kubelet[2512]: E0421 10:14:36.271818 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:36.923963 systemd-networkd[1406]: lxc_health: Gained IPv6LL Apr 21 10:14:37.274622 kubelet[2512]: E0421 10:14:37.274571 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:39.906199 kubelet[2512]: E0421 10:14:39.906041 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:14:43.224921 sshd[4357]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:43.229842 systemd[1]: sshd@26-10.0.0.21:22-10.0.0.1:37514.service: Deactivated successfully. Apr 21 10:14:43.231674 systemd[1]: session-27.scope: Deactivated successfully. Apr 21 10:14:43.232252 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. Apr 21 10:14:43.233150 systemd-logind[1443]: Removed session 27. Apr 21 10:14:44.907178 kubelet[2512]: E0421 10:14:44.907080 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"