Apr 24 23:41:59.870087 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:41:59.870104 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:41:59.870113 kernel: BIOS-provided physical RAM map:
Apr 24 23:41:59.870119 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 24 23:41:59.870124 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 24 23:41:59.870129 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 23:41:59.870158 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 24 23:41:59.870164 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 24 23:41:59.870170 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 23:41:59.870176 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 23:41:59.870182 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 23:41:59.870187 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 23:41:59.870192 kernel: NX (Execute Disable) protection: active
Apr 24 23:41:59.870197 kernel: APIC: Static calls initialized
Apr 24 23:41:59.870204 kernel: SMBIOS 2.8 present.
Apr 24 23:41:59.870211 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 24 23:41:59.870216 kernel: Hypervisor detected: KVM
Apr 24 23:41:59.870222 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:41:59.870228 kernel: kvm-clock: using sched offset of 3154685687 cycles
Apr 24 23:41:59.870233 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:41:59.870239 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 23:41:59.870245 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:41:59.870304 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:41:59.870309 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 24 23:41:59.870317 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 23:41:59.870323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:41:59.870329 kernel: Using GB pages for direct mapping
Apr 24 23:41:59.870334 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:41:59.870340 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 24 23:41:59.870346 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870352 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870357 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870363 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 24 23:41:59.870370 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870375 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870381 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870386 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:41:59.870392 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 24 23:41:59.870398 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 24 23:41:59.870404 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 24 23:41:59.870411 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 24 23:41:59.870419 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 24 23:41:59.870424 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 24 23:41:59.870430 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 24 23:41:59.870436 kernel: No NUMA configuration found
Apr 24 23:41:59.870442 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 24 23:41:59.870448 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 24 23:41:59.870455 kernel: Zone ranges:
Apr 24 23:41:59.870461 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:41:59.870467 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 24 23:41:59.870473 kernel: Normal empty
Apr 24 23:41:59.870478 kernel: Movable zone start for each node
Apr 24 23:41:59.870483 kernel: Early memory node ranges
Apr 24 23:41:59.870488 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 23:41:59.870493 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 24 23:41:59.870498 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 24 23:41:59.870503 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:41:59.870509 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 23:41:59.870514 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 24 23:41:59.870519 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:41:59.870524 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:41:59.870529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:41:59.870533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:41:59.870538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:41:59.870543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:41:59.870548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:41:59.870554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:41:59.870559 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:41:59.870564 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:41:59.870569 kernel: TSC deadline timer available
Apr 24 23:41:59.870574 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 24 23:41:59.870579 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:41:59.870584 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 23:41:59.870589 kernel: kvm-guest: setup PV sched yield
Apr 24 23:41:59.870594 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 23:41:59.870599 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:41:59.870605 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:41:59.870610 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 23:41:59.870615 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 24 23:41:59.870620 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 24 23:41:59.870625 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 23:41:59.870630 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:41:59.870635 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:41:59.870640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:41:59.870647 kernel: random: crng init done
Apr 24 23:41:59.870652 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:41:59.870657 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:41:59.870662 kernel: Fallback order for Node 0: 0
Apr 24 23:41:59.870667 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 24 23:41:59.870671 kernel: Policy zone: DMA32
Apr 24 23:41:59.870676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:41:59.870682 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 24 23:41:59.870687 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 23:41:59.870693 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:41:59.870698 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:41:59.870703 kernel: Dynamic Preempt: voluntary
Apr 24 23:41:59.870708 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:41:59.870713 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:41:59.870718 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 23:41:59.870723 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:41:59.870728 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:41:59.870733 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:41:59.870740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:41:59.870745 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 23:41:59.870750 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 23:41:59.870755 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:41:59.870759 kernel: Console: colour VGA+ 80x25
Apr 24 23:41:59.870764 kernel: printk: console [ttyS0] enabled
Apr 24 23:41:59.870769 kernel: ACPI: Core revision 20230628
Apr 24 23:41:59.870774 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:41:59.870779 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:41:59.870786 kernel: x2apic enabled
Apr 24 23:41:59.870791 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:41:59.870796 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 23:41:59.870801 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 23:41:59.870806 kernel: kvm-guest: setup PV IPIs
Apr 24 23:41:59.870837 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:41:59.870842 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:41:59.870854 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 23:41:59.870860 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:41:59.870865 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 23:41:59.870871 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 23:41:59.870876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:41:59.870883 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:41:59.870889 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:41:59.870894 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:41:59.870900 kernel: RETBleed: Vulnerable
Apr 24 23:41:59.870907 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:41:59.870912 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:41:59.870918 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 23:41:59.870923 kernel: active return thunk: its_return_thunk
Apr 24 23:41:59.870929 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:41:59.870935 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:41:59.870940 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:41:59.870946 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:41:59.870951 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:41:59.870959 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:41:59.870964 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:41:59.870970 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:41:59.870975 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:41:59.870981 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:41:59.870986 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:41:59.870992 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:41:59.870997 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:41:59.871003 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:41:59.871010 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:41:59.871015 kernel: landlock: Up and running.
Apr 24 23:41:59.871021 kernel: SELinux: Initializing.
Apr 24 23:41:59.871026 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:41:59.871032 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:41:59.871038 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 23:41:59.871043 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:41:59.871049 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:41:59.871054 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:41:59.871061 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 23:41:59.871067 kernel: signal: max sigframe size: 3632
Apr 24 23:41:59.871072 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:41:59.871078 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:41:59.871083 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:41:59.871089 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:41:59.871094 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:41:59.871100 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 23:41:59.871105 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 23:41:59.871113 kernel: smpboot: Max logical packages: 1
Apr 24 23:41:59.871118 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 23:41:59.871124 kernel: devtmpfs: initialized
Apr 24 23:41:59.871129 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:41:59.871134 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:41:59.871140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 23:41:59.871145 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:41:59.871151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:41:59.871156 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:41:59.871163 kernel: audit: type=2000 audit(1777074118.465:1): state=initialized audit_enabled=0 res=1
Apr 24 23:41:59.871169 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:41:59.871174 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:41:59.871179 kernel: cpuidle: using governor menu
Apr 24 23:41:59.871185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:41:59.871190 kernel: dca service started, version 1.12.1
Apr 24 23:41:59.871196 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:41:59.871201 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:41:59.871207 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:41:59.871214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:41:59.871219 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:41:59.871225 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:41:59.871230 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:41:59.871236 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:41:59.871241 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:41:59.871289 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:41:59.871361 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:41:59.871367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:41:59.871375 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:41:59.871381 kernel: ACPI: Interpreter enabled
Apr 24 23:41:59.871386 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:41:59.871392 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:41:59.871397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:41:59.871403 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:41:59.871408 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:41:59.871413 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:41:59.871519 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:41:59.871582 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:41:59.871636 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:41:59.871643 kernel: PCI host bridge to bus 0000:00
Apr 24 23:41:59.871700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:41:59.871748 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:41:59.871797 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:41:59.871877 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 24 23:41:59.871926 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:41:59.871974 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 24 23:41:59.872022 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:41:59.872088 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:41:59.872148 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:41:59.872403 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 24 23:41:59.872459 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 24 23:41:59.872518 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 24 23:41:59.872572 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:41:59.872633 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 24 23:41:59.872689 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 24 23:41:59.872745 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 24 23:41:59.872804 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 23:41:59.872905 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 24 23:41:59.872961 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 24 23:41:59.873015 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 24 23:41:59.873070 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 23:41:59.873128 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:41:59.873182 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 24 23:41:59.873239 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 24 23:41:59.873349 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 24 23:41:59.873405 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 24 23:41:59.873465 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:41:59.873520 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:41:59.873579 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:41:59.873634 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 24 23:41:59.873691 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 24 23:41:59.873752 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:41:59.873807 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 24 23:41:59.873844 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:41:59.873850 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:41:59.873856 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:41:59.873861 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:41:59.873867 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:41:59.873874 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:41:59.873879 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:41:59.873885 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:41:59.873890 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:41:59.873896 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:41:59.873901 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:41:59.873906 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:41:59.873912 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:41:59.873917 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:41:59.873924 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:41:59.873929 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:41:59.873935 kernel: iommu: Default domain type: Translated
Apr 24 23:41:59.873940 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:41:59.873946 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:41:59.873951 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:41:59.873957 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 24 23:41:59.873962 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 24 23:41:59.874018 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:41:59.874075 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:41:59.874189 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:41:59.874197 kernel: vgaarb: loaded
Apr 24 23:41:59.874203 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:41:59.874208 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:41:59.874214 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:41:59.874219 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:41:59.874225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:41:59.874232 kernel: pnp: PnP ACPI init
Apr 24 23:41:59.874375 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:41:59.874384 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 23:41:59.874390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:41:59.874395 kernel: NET: Registered PF_INET protocol family
Apr 24 23:41:59.874401 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:41:59.874407 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:41:59.874412 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:41:59.874420 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:41:59.874426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:41:59.874431 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:41:59.874437 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:41:59.874442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:41:59.874448 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:41:59.874454 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:41:59.874553 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:41:59.874632 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:41:59.874683 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:41:59.874732 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 24 23:41:59.874781 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:41:59.874867 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 24 23:41:59.874875 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:41:59.874881 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:41:59.874887 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:41:59.874892 kernel: Initialise system trusted keyrings
Apr 24 23:41:59.874900 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:41:59.874906 kernel: Key type asymmetric registered
Apr 24 23:41:59.874911 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:41:59.874916 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:41:59.874922 kernel: io scheduler mq-deadline registered
Apr 24 23:41:59.874927 kernel: io scheduler kyber registered
Apr 24 23:41:59.874933 kernel: io scheduler bfq registered
Apr 24 23:41:59.874938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:41:59.874944 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:41:59.874951 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:41:59.874957 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 23:41:59.874963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:41:59.874968 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:41:59.874973 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:41:59.874979 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:41:59.874984 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:41:59.875043 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 23:41:59.875051 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:41:59.875102 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 23:41:59.875153 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T23:41:59 UTC (1777074119)
Apr 24 23:41:59.875202 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 23:41:59.875209 kernel: intel_pstate: CPU model not supported
Apr 24 23:41:59.875215 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:41:59.875220 kernel: Segment Routing with IPv6
Apr 24 23:41:59.875226 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:41:59.875231 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:41:59.875238 kernel: Key type dns_resolver registered
Apr 24 23:41:59.875243 kernel: IPI shorthand broadcast: enabled
Apr 24 23:41:59.875341 kernel: sched_clock: Marking stable (655007954, 178670484)->(925775469, -92097031)
Apr 24 23:41:59.875347 kernel: registered taskstats version 1
Apr 24 23:41:59.875352 kernel: Loading compiled-in X.509 certificates
Apr 24 23:41:59.875358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:41:59.875363 kernel: Key type .fscrypt registered
Apr 24 23:41:59.875369 kernel: Key type fscrypt-provisioning registered
Apr 24 23:41:59.875374 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:41:59.875381 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:41:59.875387 kernel: ima: No architecture policies found
Apr 24 23:41:59.875392 kernel: clk: Disabling unused clocks
Apr 24 23:41:59.875397 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:41:59.875403 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:41:59.875408 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:41:59.875414 kernel: Run /init as init process
Apr 24 23:41:59.875419 kernel: with arguments:
Apr 24 23:41:59.875425 kernel: /init
Apr 24 23:41:59.875430 kernel: with environment:
Apr 24 23:41:59.875437 kernel: HOME=/
Apr 24 23:41:59.875442 kernel: TERM=linux
Apr 24 23:41:59.875450 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:41:59.875457 systemd[1]: Detected virtualization kvm.
Apr 24 23:41:59.875463 systemd[1]: Detected architecture x86-64.
Apr 24 23:41:59.875469 systemd[1]: Running in initrd.
Apr 24 23:41:59.875475 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:41:59.875482 systemd[1]: Hostname set to .
Apr 24 23:41:59.875488 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:41:59.875493 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:41:59.875499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:41:59.875505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:41:59.875512 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:41:59.875517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:41:59.875523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:41:59.875531 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:41:59.875546 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:41:59.875553 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:41:59.875559 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:41:59.875565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:41:59.875572 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:41:59.875578 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:41:59.875584 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:41:59.875590 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:41:59.875596 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:41:59.875602 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:41:59.875608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:41:59.875614 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:41:59.875621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:41:59.875627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:41:59.875633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:41:59.875639 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:41:59.875645 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:41:59.875651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:41:59.875657 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:41:59.875663 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:41:59.875669 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:41:59.875676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:41:59.875682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:41:59.875688 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:41:59.875707 systemd-journald[193]: Collecting audit messages is disabled.
Apr 24 23:41:59.875723 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:41:59.875730 systemd-journald[193]: Journal started
Apr 24 23:41:59.875747 systemd-journald[193]: Runtime Journal (/run/log/journal/82007ef00f2a43539e74424699fb8d89) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:41:59.880337 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:41:59.881947 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:41:59.890377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:41:59.978153 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:41:59.978173 kernel: Bridge firewalling registered
Apr 24 23:41:59.891340 systemd-modules-load[194]: Inserted module 'overlay'
Apr 24 23:41:59.913375 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 24 23:41:59.982372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:41:59.984958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:41:59.987338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:41:59.990233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:41:59.992039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:41:59.995067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:41:59.999325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:42:00.009188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:42:00.009878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:42:00.010886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:42:00.012449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:42:00.013335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:42:00.016420 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:42:00.035620 dracut-cmdline[232]: dracut-dracut-053
Apr 24 23:42:00.038067 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:42:00.042335 systemd-resolved[230]: Positive Trust Anchors:
Apr 24 23:42:00.042340 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:42:00.042365 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:42:00.044218 systemd-resolved[230]: Defaulting to hostname 'linux'.
Apr 24 23:42:00.044920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:42:00.048960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:42:00.137289 kernel: SCSI subsystem initialized
Apr 24 23:42:00.145285 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:42:00.154285 kernel: iscsi: registered transport (tcp)
Apr 24 23:42:00.171302 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:42:00.171319 kernel: QLogic iSCSI HBA Driver
Apr 24 23:42:00.200158 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:42:00.212411 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:42:00.236315 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:42:00.236364 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:42:00.238466 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:42:00.278307 kernel: raid6: avx512x4 gen() 44566 MB/s
Apr 24 23:42:00.296319 kernel: raid6: avx512x2 gen() 42757 MB/s
Apr 24 23:42:00.314323 kernel: raid6: avx512x1 gen() 43541 MB/s
Apr 24 23:42:00.332309 kernel: raid6: avx2x4 gen() 36462 MB/s
Apr 24 23:42:00.349315 kernel: raid6: avx2x2 gen() 37513 MB/s
Apr 24 23:42:00.366786 kernel: raid6: avx2x1 gen() 28617 MB/s
Apr 24 23:42:00.367041 kernel: raid6: using algorithm avx512x4 gen() 44566 MB/s
Apr 24 23:42:00.384785 kernel: raid6: .... xor() 10318 MB/s, rmw enabled
Apr 24 23:42:00.384835 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:42:00.402288 kernel: xor: automatically using best checksumming function avx
Apr 24 23:42:00.523327 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:42:00.532544 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:42:00.548443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:42:00.557079 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 24 23:42:00.559789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:42:00.571408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:42:00.581557 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation
Apr 24 23:42:00.603204 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:42:00.615379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:42:00.647655 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:42:00.657489 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:42:00.671364 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:42:00.676047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:42:00.681113 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:42:00.687568 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:42:00.683804 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:42:00.697370 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 24 23:42:00.697355 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:42:00.715917 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:42:00.715932 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:42:00.715940 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 24 23:42:00.716034 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:42:00.716043 kernel: GPT:9289727 != 19775487
Apr 24 23:42:00.716050 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:42:00.716061 kernel: GPT:9289727 != 19775487
Apr 24 23:42:00.716067 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:42:00.716074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:42:00.718944 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:42:00.719103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:42:00.721121 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:42:00.722671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:42:00.738578 kernel: libata version 3.00 loaded.
Apr 24 23:42:00.722893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:42:00.743331 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Apr 24 23:42:00.725861 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:42:00.747670 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (464)
Apr 24 23:42:00.751549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:42:00.754993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:42:00.764404 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:42:00.764532 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:42:00.765712 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 24 23:42:00.847836 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:42:00.848000 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:42:00.848071 kernel: scsi host0: ahci
Apr 24 23:42:00.848149 kernel: scsi host1: ahci
Apr 24 23:42:00.848221 kernel: scsi host2: ahci
Apr 24 23:42:00.848331 kernel: scsi host3: ahci
Apr 24 23:42:00.848400 kernel: scsi host4: ahci
Apr 24 23:42:00.848466 kernel: scsi host5: ahci
Apr 24 23:42:00.848530 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 24 23:42:00.848538 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 24 23:42:00.848545 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 24 23:42:00.848552 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 24 23:42:00.848559 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 24 23:42:00.848566 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 24 23:42:00.851049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:42:00.855215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 24 23:42:00.862223 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 24 23:42:00.862761 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 24 23:42:00.872103 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:42:00.885631 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:42:00.888848 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:42:00.893199 disk-uuid[572]: Primary Header is updated.
Apr 24 23:42:00.893199 disk-uuid[572]: Secondary Entries is updated.
Apr 24 23:42:00.893199 disk-uuid[572]: Secondary Header is updated.
Apr 24 23:42:00.898284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:42:00.901281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:42:00.911520 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:42:01.082287 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:42:01.082412 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:42:01.084273 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:42:01.086290 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:42:01.086305 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:42:01.087287 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 24 23:42:01.088528 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 24 23:42:01.088538 kernel: ata3.00: applying bridge limits
Apr 24 23:42:01.089304 kernel: ata3.00: configured for UDMA/100
Apr 24 23:42:01.092285 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:42:01.136411 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 24 23:42:01.136562 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:42:01.149282 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:42:01.903283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:42:01.903512 disk-uuid[573]: The operation has completed successfully.
Apr 24 23:42:01.924660 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:42:01.924750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:42:01.944388 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:42:01.948304 sh[597]: Success
Apr 24 23:42:01.959303 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:42:01.982549 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:42:01.994310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:42:01.998000 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:42:02.006043 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:42:02.006068 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:42:02.006077 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:42:02.008339 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:42:02.008352 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:42:02.013140 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:42:02.015946 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:42:02.027392 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:42:02.028421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:42:02.041709 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:42:02.041737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:42:02.041745 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:42:02.046279 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:42:02.051402 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:42:02.053786 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:42:02.058754 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:42:02.066588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:42:02.100869 ignition[701]: Ignition 2.19.0
Apr 24 23:42:02.100881 ignition[701]: Stage: fetch-offline
Apr 24 23:42:02.100904 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:42:02.100910 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:42:02.100974 ignition[701]: parsed url from cmdline: ""
Apr 24 23:42:02.100976 ignition[701]: no config URL provided
Apr 24 23:42:02.100979 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:42:02.100984 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:42:02.101004 ignition[701]: op(1): [started] loading QEMU firmware config module
Apr 24 23:42:02.101007 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 24 23:42:02.115229 ignition[701]: op(1): [finished] loading QEMU firmware config module
Apr 24 23:42:02.118296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:42:02.133383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:42:02.149016 systemd-networkd[785]: lo: Link UP
Apr 24 23:42:02.149033 systemd-networkd[785]: lo: Gained carrier
Apr 24 23:42:02.149895 systemd-networkd[785]: Enumeration completed
Apr 24 23:42:02.150317 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:42:02.151144 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:42:02.151146 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:42:02.152042 systemd-networkd[785]: eth0: Link UP
Apr 24 23:42:02.152044 systemd-networkd[785]: eth0: Gained carrier
Apr 24 23:42:02.152049 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:42:02.155395 systemd[1]: Reached target network.target - Network.
Apr 24 23:42:02.221596 ignition[701]: parsing config with SHA512: 1c5caa330294e6fa0c6467d5545f37715df833488df0d4d6e510f2eeb1b23db54b3e946f1384dc492f50f9356f1e625e4f77fce33247c2d3df7c5c495d57be36
Apr 24 23:42:02.225136 unknown[701]: fetched base config from "system"
Apr 24 23:42:02.225144 unknown[701]: fetched user config from "qemu"
Apr 24 23:42:02.226574 ignition[701]: fetch-offline: fetch-offline passed
Apr 24 23:42:02.228042 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:42:02.226641 ignition[701]: Ignition finished successfully
Apr 24 23:42:02.228303 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:42:02.230744 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 24 23:42:02.242508 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:42:02.252749 ignition[789]: Ignition 2.19.0
Apr 24 23:42:02.252762 ignition[789]: Stage: kargs
Apr 24 23:42:02.252941 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:42:02.252948 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:42:02.253581 ignition[789]: kargs: kargs passed
Apr 24 23:42:02.253608 ignition[789]: Ignition finished successfully
Apr 24 23:42:02.257344 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:42:02.264439 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:42:02.273097 ignition[797]: Ignition 2.19.0
Apr 24 23:42:02.273111 ignition[797]: Stage: disks
Apr 24 23:42:02.273222 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:42:02.273228 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:42:02.273927 ignition[797]: disks: disks passed
Apr 24 23:42:02.273953 ignition[797]: Ignition finished successfully
Apr 24 23:42:02.278205 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:42:02.281358 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:42:02.281906 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:42:02.284552 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:42:02.289491 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:42:02.290018 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:42:02.308429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:42:02.320698 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:42:02.324681 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:42:02.333358 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:42:02.403161 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:42:02.405180 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:42:02.404116 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:42:02.427397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:42:02.429511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:42:02.435839 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Apr 24 23:42:02.432087 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:42:02.440987 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:42:02.441004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:42:02.441013 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:42:02.441020 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:42:02.432121 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:42:02.432137 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:42:02.436543 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:42:02.445040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:42:02.455419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:42:02.482681 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:42:02.486946 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:42:02.490877 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:42:02.493612 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:42:02.551023 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:42:02.563369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:42:02.564479 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:42:02.574284 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:42:02.584655 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:42:02.592172 ignition[931]: INFO : Ignition 2.19.0
Apr 24 23:42:02.592172 ignition[931]: INFO : Stage: mount
Apr 24 23:42:02.594196 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:42:02.594196 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:42:02.594196 ignition[931]: INFO : mount: mount passed
Apr 24 23:42:02.594196 ignition[931]: INFO : Ignition finished successfully
Apr 24 23:42:02.599777 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:42:02.607406 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:42:03.005159 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:42:03.013660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:42:03.020267 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Apr 24 23:42:03.022998 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:42:03.023150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:42:03.023159 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:42:03.027273 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:42:03.027969 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:42:03.046875 ignition[960]: INFO : Ignition 2.19.0
Apr 24 23:42:03.046875 ignition[960]: INFO : Stage: files
Apr 24 23:42:03.049025 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:42:03.049025 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:42:03.049025 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:42:03.049025 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:42:03.049025 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:42:03.057380 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:42:03.051558 unknown[960]: wrote ssh authorized keys file for user: core
Apr 24 23:42:03.123273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:42:03.216562 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:42:03.216562 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:42:03.221319 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 24 23:42:03.276625 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 24 23:42:03.357867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:42:03.357867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:42:03.362591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:42:03.362591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:42:03.362591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:42:03.362591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:42:03.362591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:42:03.373372 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:42:03.375508 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:42:03.377885 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:42:03.380198 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:42:03.382392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:42:03.385536 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:42:03.388610 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:42:03.391258 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:42:03.621240 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 24 23:42:04.057470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:42:04.057470 ignition[960]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 24 23:42:04.062162 ignition[960]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:42:04.091411 ignition[960]: INFO : files: files passed
Apr 24 23:42:04.091411 ignition[960]: INFO : Ignition finished successfully
Apr 24 23:42:04.080865 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:42:04.096438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:42:04.099336 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:42:04.102047 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:42:04.129268 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 23:42:04.102126 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:42:04.132489 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:42:04.132489 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:42:04.107457 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 24 23:42:04.138278 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:42:04.110597 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:42:04.111211 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:42:04.121454 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:42:04.149765 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:42:04.149864 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:42:04.152615 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:42:04.155237 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:42:04.157670 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:42:04.160914 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:42:04.173615 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:42:04.176796 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:42:04.191931 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:42:04.194901 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:42:04.195799 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:42:04.198530 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:42:04.198633 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:42:04.202744 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:42:04.203587 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:42:04.206888 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:42:04.209015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:42:04.211614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:42:04.216581 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:42:04.217165 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:42:04.219627 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:42:04.224717 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 24 23:42:04.225673 systemd[1]: Stopped target swap.target - Swaps. 
Apr 24 23:42:04.229376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:42:04.229523 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:42:04.233141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:42:04.233708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:42:04.238661 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:42:04.239980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:42:04.240570 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:42:04.240670 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 24 23:42:04.245924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:42:04.246086 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:42:04.248895 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:42:04.253081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:42:04.257435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:42:04.258155 systemd[1]: Stopped target slices.target - Slice Units. Apr 24 23:42:04.261699 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:42:04.263782 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:42:04.263860 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:42:04.265994 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:42:04.266069 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:42:04.268845 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:42:04.268945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 24 23:42:04.271452 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:42:04.271517 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:42:04.286482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:42:04.287344 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:42:04.287521 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:42:04.293718 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:42:04.294501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:42:04.294586 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:42:04.298961 ignition[1014]: INFO : Ignition 2.19.0 Apr 24 23:42:04.298961 ignition[1014]: INFO : Stage: umount Apr 24 23:42:04.298961 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:42:04.298961 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:42:04.298961 ignition[1014]: INFO : umount: umount passed Apr 24 23:42:04.298961 ignition[1014]: INFO : Ignition finished successfully Apr 24 23:42:04.301981 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:42:04.302067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:42:04.306799 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:42:04.306887 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:42:04.308827 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:42:04.308923 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 24 23:42:04.310119 systemd[1]: Stopped target network.target - Network. Apr 24 23:42:04.312683 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:42:04.312730 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 24 23:42:04.314781 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:42:04.314809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:42:04.319565 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:42:04.321518 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:42:04.325022 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:42:04.325066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:42:04.332276 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:42:04.333089 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:42:04.333305 systemd-networkd[785]: eth0: DHCPv6 lease lost Apr 24 23:42:04.337614 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:42:04.338008 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:42:04.338089 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:42:04.340046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:42:04.340089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:42:04.350551 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:42:04.350978 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:42:04.351030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:42:04.356940 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:42:04.372439 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:42:04.372532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:42:04.373397 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:42:04.373449 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Apr 24 23:42:04.377325 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:42:04.377396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:42:04.383575 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:42:04.383659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:42:04.385538 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:42:04.385571 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:42:04.387944 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:42:04.387975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:42:04.390966 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:42:04.390995 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:42:04.393786 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:42:04.393898 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:42:04.396578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:42:04.396622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:42:04.398952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:42:04.398977 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:42:04.401540 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:42:04.401568 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:42:04.406681 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:42:04.406725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:42:04.411350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 24 23:42:04.411397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:42:04.426378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:42:04.427804 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:42:04.431087 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:42:04.434346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:42:04.434385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:42:04.438797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:42:04.438904 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:42:04.442962 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:42:04.459473 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:42:04.467060 systemd[1]: Switching root. Apr 24 23:42:04.509641 systemd-journald[193]: Journal stopped Apr 24 23:42:05.206919 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Apr 24 23:42:05.206975 kernel: SELinux: policy capability network_peer_controls=1 Apr 24 23:42:05.206988 kernel: SELinux: policy capability open_perms=1 Apr 24 23:42:05.206996 kernel: SELinux: policy capability extended_socket_class=1 Apr 24 23:42:05.207007 kernel: SELinux: policy capability always_check_network=0 Apr 24 23:42:05.207014 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 24 23:42:05.207022 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 24 23:42:05.207029 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 24 23:42:05.207036 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 24 23:42:05.207046 kernel: audit: type=1403 audit(1777074124.660:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 24 23:42:05.207060 systemd[1]: Successfully loaded SELinux policy in 32.549ms. Apr 24 23:42:05.207076 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.606ms. Apr 24 23:42:05.207086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:42:05.207094 systemd[1]: Detected virtualization kvm. Apr 24 23:42:05.207102 systemd[1]: Detected architecture x86-64. Apr 24 23:42:05.207109 systemd[1]: Detected first boot. Apr 24 23:42:05.207117 systemd[1]: Initializing machine ID from VM UUID. Apr 24 23:42:05.207125 zram_generator::config[1075]: No configuration found. Apr 24 23:42:05.207135 systemd[1]: Populated /etc with preset unit settings. Apr 24 23:42:05.207143 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:42:05.207151 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 24 23:42:05.207161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 24 23:42:05.207168 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 24 23:42:05.207176 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 24 23:42:05.207184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 24 23:42:05.207192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 24 23:42:05.207200 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 24 23:42:05.207209 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 24 23:42:05.207217 systemd[1]: Created slice user.slice - User and Session Slice. Apr 24 23:42:05.207226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:42:05.207235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:42:05.207242 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 24 23:42:05.207324 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 24 23:42:05.207334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 24 23:42:05.207342 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:42:05.207350 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 24 23:42:05.207357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:42:05.207365 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 24 23:42:05.207374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 24 23:42:05.207383 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:42:05.207390 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:42:05.207398 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:42:05.207406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 24 23:42:05.207414 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 24 23:42:05.207421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 23:42:05.207429 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:42:05.207438 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:42:05.207446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:42:05.207455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:42:05.207464 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 24 23:42:05.207472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 24 23:42:05.207480 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 24 23:42:05.207488 systemd[1]: Mounting media.mount - External Media Directory... Apr 24 23:42:05.207496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:42:05.207504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 24 23:42:05.207513 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 24 23:42:05.207521 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 24 23:42:05.207529 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:42:05.207536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 24 23:42:05.207544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:42:05.207552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:42:05.207559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:42:05.207566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:42:05.207574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:42:05.207583 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:42:05.207590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:42:05.207598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:42:05.207607 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 24 23:42:05.207616 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 24 23:42:05.207624 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:42:05.207632 kernel: fuse: init (API version 7.39) Apr 24 23:42:05.207639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:42:05.207647 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:42:05.207656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:42:05.207663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:42:05.207671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 24 23:42:05.207680 kernel: ACPI: bus type drm_connector registered Apr 24 23:42:05.207687 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:42:05.207694 kernel: loop: module loaded Apr 24 23:42:05.207704 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:42:05.207711 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:42:05.207734 systemd-journald[1167]: Collecting audit messages is disabled. Apr 24 23:42:05.207751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:42:05.207760 systemd-journald[1167]: Journal started Apr 24 23:42:05.207775 systemd-journald[1167]: Runtime Journal (/run/log/journal/82007ef00f2a43539e74424699fb8d89) is 6.0M, max 48.4M, 42.3M free. Apr 24 23:42:05.211280 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:42:05.212680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:42:05.214158 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:42:05.215594 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 23:42:05.217305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:42:05.219026 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:42:05.219142 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:42:05.220789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:42:05.220920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:42:05.222503 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:42:05.222615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:42:05.224112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:42:05.224222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 24 23:42:05.225941 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:42:05.226048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:42:05.227566 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:42:05.227692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:42:05.229495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:42:05.231235 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:42:05.233037 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:42:05.240874 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:42:05.245397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:42:05.247585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:42:05.249068 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 23:42:05.250446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:42:05.252594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:42:05.254447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:42:05.255465 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:42:05.257130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:42:05.259352 systemd-journald[1167]: Time spent on flushing to /var/log/journal/82007ef00f2a43539e74424699fb8d89 is 13.249ms for 941 entries. 
Apr 24 23:42:05.259352 systemd-journald[1167]: System Journal (/var/log/journal/82007ef00f2a43539e74424699fb8d89) is 8.0M, max 195.6M, 187.6M free. Apr 24 23:42:05.287726 systemd-journald[1167]: Received client request to flush runtime journal. Apr 24 23:42:05.258639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:42:05.262225 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:42:05.265013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:42:05.266821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:42:05.269439 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:42:05.275373 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:42:05.277061 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:42:05.279097 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:42:05.288630 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 24 23:42:05.289406 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:42:05.290802 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Apr 24 23:42:05.290810 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Apr 24 23:42:05.294420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:42:05.296300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:42:05.308382 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:42:05.325004 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Apr 24 23:42:05.337401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:42:05.349609 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Apr 24 23:42:05.349632 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Apr 24 23:42:05.352469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:42:05.583340 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:42:05.598538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:42:05.616092 systemd-udevd[1242]: Using default interface naming scheme 'v255'. Apr 24 23:42:05.630344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:42:05.638754 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:42:05.648445 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:42:05.668295 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1257) Apr 24 23:42:05.680117 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:42:05.684163 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 24 23:42:05.710863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 23:42:05.722214 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 24 23:42:05.731861 kernel: ACPI: button: Power Button [PWRF] Apr 24 23:42:05.737722 systemd-networkd[1249]: lo: Link UP Apr 24 23:42:05.737731 systemd-networkd[1249]: lo: Gained carrier Apr 24 23:42:05.738549 systemd-networkd[1249]: Enumeration completed Apr 24 23:42:05.738687 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 24 23:42:05.739985 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:42:05.740056 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:42:05.742588 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 23:42:05.742761 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 24 23:42:05.742858 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 23:42:05.740962 systemd-networkd[1249]: eth0: Link UP Apr 24 23:42:05.740965 systemd-networkd[1249]: eth0: Gained carrier Apr 24 23:42:05.740979 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:42:05.749377 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 24 23:42:05.751428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:42:05.754758 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 23:42:05.757388 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:42:05.771359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:42:05.885589 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:42:05.898324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:42:05.912418 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:42:05.919161 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:42:05.940307 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Apr 24 23:42:05.942156 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:42:05.953338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:42:05.958190 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:42:05.986349 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:42:05.988151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:42:05.989817 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:42:05.989840 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:42:05.991205 systemd[1]: Reached target machines.target - Containers. Apr 24 23:42:05.993204 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:42:06.007359 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:42:06.009793 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:42:06.011218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:42:06.011846 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:42:06.014167 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:42:06.017352 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 23:42:06.018447 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:42:06.022183 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 24 23:42:06.030737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:42:06.032034 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:42:06.038286 kernel: loop0: detected capacity change from 0 to 142488
Apr 24 23:42:06.051402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:42:06.076033 kernel: loop1: detected capacity change from 0 to 140768
Apr 24 23:42:06.101477 kernel: loop2: detected capacity change from 0 to 228704
Apr 24 23:42:06.125291 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:42:06.134284 kernel: loop4: detected capacity change from 0 to 140768
Apr 24 23:42:06.145269 kernel: loop5: detected capacity change from 0 to 228704
Apr 24 23:42:06.149141 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 23:42:06.149497 (sd-merge)[1310]: Merged extensions into '/usr'.
Apr 24 23:42:06.152614 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:42:06.152633 systemd[1]: Reloading...
Apr 24 23:42:06.182330 zram_generator::config[1339]: No configuration found.
Apr 24 23:42:06.197655 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:42:06.262199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:42:06.300541 systemd[1]: Reloading finished in 147 ms.
Apr 24 23:42:06.319483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:42:06.321238 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:42:06.334356 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:42:06.336062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:42:06.339140 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:42:06.339159 systemd[1]: Reloading...
Apr 24 23:42:06.350666 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:42:06.350874 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:42:06.351419 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:42:06.351587 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 24 23:42:06.351632 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 24 23:42:06.353720 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:42:06.353735 systemd-tmpfiles[1383]: Skipping /boot
Apr 24 23:42:06.358858 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:42:06.358879 systemd-tmpfiles[1383]: Skipping /boot
Apr 24 23:42:06.373279 zram_generator::config[1414]: No configuration found.
Apr 24 23:42:06.445581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:42:06.481776 systemd[1]: Reloading finished in 142 ms.
Apr 24 23:42:06.495519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:42:06.506510 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:42:06.508954 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:42:06.511272 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:42:06.514756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:42:06.517542 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:42:06.524042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.524159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:42:06.524917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:42:06.529418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:42:06.531836 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:42:06.533558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:42:06.533710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.534227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:42:06.537638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:42:06.538202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:42:06.540398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:42:06.540522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:42:06.542838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:42:06.543015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:42:06.549083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.549466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:42:06.550453 augenrules[1487]: No rules
Apr 24 23:42:06.557488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:42:06.562353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:42:06.564632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:42:06.566140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:42:06.567095 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:42:06.569896 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.570897 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:42:06.572964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:42:06.574364 systemd-resolved[1461]: Positive Trust Anchors:
Apr 24 23:42:06.574374 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:42:06.574398 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:42:06.575017 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:42:06.576991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:42:06.577096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:42:06.579205 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Apr 24 23:42:06.579858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:42:06.579987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:42:06.581867 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:42:06.584023 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:42:06.584147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:42:06.586228 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:42:06.596404 systemd[1]: Reached target network.target - Network.
Apr 24 23:42:06.597641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:42:06.599184 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.599346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:42:06.614370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:42:06.616431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:42:06.618463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:42:06.620602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:42:06.622009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:42:06.622059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:42:06.622076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:42:06.622491 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:42:06.623837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:42:06.623959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:42:06.625693 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:42:06.625853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:42:06.627472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:42:06.627576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:42:06.629319 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:42:06.629487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:42:06.633665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:42:06.633719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:42:06.634898 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 23:42:06.675739 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 23:42:06.677466 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:42:06.678195 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 24 23:42:06.678231 systemd-timesyncd[1529]: Initial clock synchronization to Fri 2026-04-24 23:42:06.444873 UTC.
Apr 24 23:42:06.679014 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:42:06.680569 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:42:06.682150 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:42:06.683720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:42:06.683750 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:42:06.684880 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:42:06.686279 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:42:06.687666 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:42:06.689231 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:42:06.691281 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:42:06.693960 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:42:06.696057 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:42:06.710311 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:42:06.711806 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:42:06.713057 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:42:06.714377 systemd[1]: System is tainted: cgroupsv1
Apr 24 23:42:06.714422 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:42:06.714438 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:42:06.715422 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:42:06.717590 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:42:06.719529 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:42:06.721731 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:42:06.722504 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:42:06.723379 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:42:06.726535 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:42:06.730076 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:42:06.734625 jq[1535]: false
Apr 24 23:42:06.736479 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:42:06.741392 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:42:06.742964 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:42:06.746429 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found loop3
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found loop4
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found loop5
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found sr0
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda1
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda2
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda3
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found usr
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda4
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda6
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda7
Apr 24 23:42:06.747714 extend-filesystems[1537]: Found vda9
Apr 24 23:42:06.747714 extend-filesystems[1537]: Checking size of /dev/vda9
Apr 24 23:42:06.749233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:42:06.751860 dbus-daemon[1534]: [system] SELinux support is enabled
Apr 24 23:42:06.753594 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:42:06.773602 update_engine[1552]: I20260424 23:42:06.758747 1552 main.cc:92] Flatcar Update Engine starting
Apr 24 23:42:06.773602 update_engine[1552]: I20260424 23:42:06.764297 1552 update_check_scheduler.cc:74] Next update check in 10m5s
Apr 24 23:42:06.768481 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:42:06.774344 jq[1557]: true
Apr 24 23:42:06.769470 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:42:06.769651 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:42:06.775440 extend-filesystems[1537]: Resized partition /dev/vda9
Apr 24 23:42:06.783442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1252)
Apr 24 23:42:06.769787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:42:06.783511 extend-filesystems[1565]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:42:06.793556 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 24 23:42:06.774714 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:42:06.774864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:42:06.796927 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:42:06.802144 jq[1566]: true
Apr 24 23:42:06.803750 tar[1564]: linux-amd64/LICENSE
Apr 24 23:42:06.804015 tar[1564]: linux-amd64/helm
Apr 24 23:42:06.812094 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 24 23:42:06.812314 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:42:06.812754 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:42:06.813201 systemd-logind[1549]: New seat seat0.
Apr 24 23:42:06.815216 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:42:06.817773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:42:06.818076 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:42:06.819932 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:42:06.820015 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:42:06.822187 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:42:06.829081 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:42:06.856376 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 24 23:42:06.858537 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:42:06.874878 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 24 23:42:06.874878 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 24 23:42:06.874878 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 24 23:42:06.883593 extend-filesystems[1537]: Resized filesystem in /dev/vda9
Apr 24 23:42:06.888206 bash[1595]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:42:06.875954 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:42:06.876215 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:42:06.885649 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:42:06.887721 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 24 23:42:06.901968 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:42:06.919899 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:42:06.928479 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:42:06.933179 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:42:06.933456 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:42:06.940445 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:42:06.949592 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:42:06.957519 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:42:06.959543 containerd[1572]: time="2026-04-24T23:42:06.959211123Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:42:06.960181 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:42:06.962436 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:42:06.978598 containerd[1572]: time="2026-04-24T23:42:06.978559068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.979769 containerd[1572]: time="2026-04-24T23:42:06.979720810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:42:06.979769 containerd[1572]: time="2026-04-24T23:42:06.979751416Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:42:06.979769 containerd[1572]: time="2026-04-24T23:42:06.979763006Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:42:06.979891 containerd[1572]: time="2026-04-24T23:42:06.979874622Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:42:06.979929 containerd[1572]: time="2026-04-24T23:42:06.979894723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.979973 containerd[1572]: time="2026-04-24T23:42:06.979956746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:42:06.979988 containerd[1572]: time="2026-04-24T23:42:06.979973262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980138 containerd[1572]: time="2026-04-24T23:42:06.980119796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980155 containerd[1572]: time="2026-04-24T23:42:06.980139202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980155 containerd[1572]: time="2026-04-24T23:42:06.980149250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980184 containerd[1572]: time="2026-04-24T23:42:06.980156474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980216 containerd[1572]: time="2026-04-24T23:42:06.980202024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980387 containerd[1572]: time="2026-04-24T23:42:06.980370154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980510 containerd[1572]: time="2026-04-24T23:42:06.980493489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:42:06.980527 containerd[1572]: time="2026-04-24T23:42:06.980510925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:42:06.980577 containerd[1572]: time="2026-04-24T23:42:06.980564596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:42:06.980614 containerd[1572]: time="2026-04-24T23:42:06.980600273Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:42:06.985768 containerd[1572]: time="2026-04-24T23:42:06.985722257Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:42:06.985808 containerd[1572]: time="2026-04-24T23:42:06.985786391Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:42:06.985808 containerd[1572]: time="2026-04-24T23:42:06.985799204Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:42:06.985851 containerd[1572]: time="2026-04-24T23:42:06.985810788Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:42:06.985851 containerd[1572]: time="2026-04-24T23:42:06.985821422Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:42:06.985960 containerd[1572]: time="2026-04-24T23:42:06.985932760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:42:06.986289 containerd[1572]: time="2026-04-24T23:42:06.986236374Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:42:06.986377 containerd[1572]: time="2026-04-24T23:42:06.986360320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:42:06.986403 containerd[1572]: time="2026-04-24T23:42:06.986381391Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:42:06.986403 containerd[1572]: time="2026-04-24T23:42:06.986391103Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:42:06.986403 containerd[1572]: time="2026-04-24T23:42:06.986401604Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986410891Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986420294Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986430657Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986440072Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986449003Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986458 containerd[1572]: time="2026-04-24T23:42:06.986456948Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986464674Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986478580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986487834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986496414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986505576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986514265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986523700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986533 containerd[1572]: time="2026-04-24T23:42:06.986532566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986541910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986551017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986565810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986574328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986582046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986590133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986600210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986613947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986622422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986638 containerd[1572]: time="2026-04-24T23:42:06.986629691Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986663709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986674927Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986682300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986690704Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986697118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986705702Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986712584Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:42:06.986756 containerd[1572]: time="2026-04-24T23:42:06.986719400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:42:06.987222 containerd[1572]: time="2026-04-24T23:42:06.987039940Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:42:06.987222 containerd[1572]: time="2026-04-24T23:42:06.987120767Z" level=info msg="Connect containerd service"
Apr 24 23:42:06.987950 containerd[1572]: time="2026-04-24T23:42:06.987165045Z" level=info msg="using legacy CRI server"
Apr 24 23:42:06.987950 containerd[1572]: time="2026-04-24T23:42:06.987291778Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:42:06.987950 containerd[1572]: time="2026-04-24T23:42:06.987393043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:42:06.988291 containerd[1572]: time="2026-04-24T23:42:06.988245109Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:42:06.988398 containerd[1572]: time="2026-04-24T23:42:06.988381170Z" level=info msg="Start subscribing containerd event"
Apr 24 23:42:06.988447 containerd[1572]: time="2026-04-24T23:42:06.988440116Z" level=info msg="Start recovering state"
Apr 24 23:42:06.988759 containerd[1572]: time="2026-04-24T23:42:06.988536902Z" level=info msg="Start event monitor"
Apr 24 23:42:06.988819 containerd[1572]: time="2026-04-24T23:42:06.988801754Z"
level=info msg="Start snapshots syncer" Apr 24 23:42:06.988834 containerd[1572]: time="2026-04-24T23:42:06.988820326Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:42:06.988834 containerd[1572]: time="2026-04-24T23:42:06.988826411Z" level=info msg="Start streaming server" Apr 24 23:42:06.988925 containerd[1572]: time="2026-04-24T23:42:06.988702628Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:42:06.989018 containerd[1572]: time="2026-04-24T23:42:06.989004209Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:42:06.989066 containerd[1572]: time="2026-04-24T23:42:06.989053997Z" level=info msg="containerd successfully booted in 0.030402s" Apr 24 23:42:06.989122 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:42:07.115696 systemd-networkd[1249]: eth0: Gained IPv6LL Apr 24 23:42:07.117931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:42:07.119965 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:42:07.132428 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 23:42:07.134935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:42:07.137137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:42:07.152528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:42:07.156402 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 23:42:07.156562 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 23:42:07.158643 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:42:07.221401 tar[1564]: linux-amd64/README.md Apr 24 23:42:07.234308 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 24 23:42:07.750144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:07.751914 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:42:07.753085 systemd[1]: Startup finished in 5.762s (kernel) + 3.124s (userspace) = 8.887s.
Apr 24 23:42:07.753700 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:42:08.288728 kubelet[1671]: E0424 23:42:08.288624 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:42:08.291165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:42:08.291345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:42:12.498553 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:42:12.515446 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:45476.service - OpenSSH per-connection server daemon (10.0.0.1:45476).
Apr 24 23:42:12.548135 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 45476 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:12.549387 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:12.554739 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:42:12.568433 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:42:12.569609 systemd-logind[1549]: New session 1 of user core.
Apr 24 23:42:12.576078 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:42:12.577626 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:42:12.582772 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:42:12.642658 systemd[1690]: Queued start job for default target default.target.
Apr 24 23:42:12.642888 systemd[1690]: Created slice app.slice - User Application Slice.
Apr 24 23:42:12.642912 systemd[1690]: Reached target paths.target - Paths.
Apr 24 23:42:12.642921 systemd[1690]: Reached target timers.target - Timers.
Apr 24 23:42:12.659328 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:42:12.663659 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:42:12.663706 systemd[1690]: Reached target sockets.target - Sockets.
Apr 24 23:42:12.663715 systemd[1690]: Reached target basic.target - Basic System.
Apr 24 23:42:12.663737 systemd[1690]: Reached target default.target - Main User Target.
Apr 24 23:42:12.663753 systemd[1690]: Startup finished in 76ms.
Apr 24 23:42:12.663956 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:42:12.664910 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:42:12.716799 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:45488.service - OpenSSH per-connection server daemon (10.0.0.1:45488).
Apr 24 23:42:12.743729 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 45488 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:12.744656 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:12.747698 systemd-logind[1549]: New session 2 of user core.
Apr 24 23:42:12.755459 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:42:12.803955 sshd[1702]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:12.814479 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:45500.service - OpenSSH per-connection server daemon (10.0.0.1:45500).
Apr 24 23:42:12.814799 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:45488.service: Deactivated successfully.
Apr 24 23:42:12.815797 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 23:42:12.816218 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit.
Apr 24 23:42:12.817232 systemd-logind[1549]: Removed session 2.
Apr 24 23:42:12.837548 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 45500 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:12.838404 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:12.841190 systemd-logind[1549]: New session 3 of user core.
Apr 24 23:42:12.854434 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 23:42:12.900030 sshd[1707]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:12.908445 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:45512.service - OpenSSH per-connection server daemon (10.0.0.1:45512).
Apr 24 23:42:12.908775 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:45500.service: Deactivated successfully.
Apr 24 23:42:12.909754 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 23:42:12.910179 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit.
Apr 24 23:42:12.911028 systemd-logind[1549]: Removed session 3.
Apr 24 23:42:12.931418 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 45512 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:12.932230 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:12.934946 systemd-logind[1549]: New session 4 of user core.
Apr 24 23:42:12.952406 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 23:42:13.000430 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:13.007429 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:45518.service - OpenSSH per-connection server daemon (10.0.0.1:45518).
Apr 24 23:42:13.007692 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:45512.service: Deactivated successfully.
Apr 24 23:42:13.008735 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 23:42:13.009155 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Apr 24 23:42:13.010028 systemd-logind[1549]: Removed session 4.
Apr 24 23:42:13.030629 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 45518 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:13.031818 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:13.034494 systemd-logind[1549]: New session 5 of user core.
Apr 24 23:42:13.040425 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 23:42:13.093285 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:42:13.093503 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:42:13.113075 sudo[1730]: pam_unix(sudo:session): session closed for user root
Apr 24 23:42:13.114448 sshd[1723]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:13.123671 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:45528.service - OpenSSH per-connection server daemon (10.0.0.1:45528).
Apr 24 23:42:13.124073 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:45518.service: Deactivated successfully.
Apr 24 23:42:13.125491 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:42:13.126832 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:42:13.127513 systemd-logind[1549]: Removed session 5.
Apr 24 23:42:13.151808 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 45528 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:13.152794 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:13.156323 systemd-logind[1549]: New session 6 of user core.
Apr 24 23:42:13.162677 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:42:13.212437 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:42:13.212684 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:42:13.216209 sudo[1740]: pam_unix(sudo:session): session closed for user root
Apr 24 23:42:13.220174 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:42:13.220418 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:42:13.242458 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:42:13.243757 auditctl[1743]: No rules
Apr 24 23:42:13.244445 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:42:13.244631 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:42:13.245933 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:42:13.270773 augenrules[1762]: No rules
Apr 24 23:42:13.271664 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:42:13.272585 sudo[1739]: pam_unix(sudo:session): session closed for user root
Apr 24 23:42:13.274309 sshd[1732]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:13.287469 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:45534.service - OpenSSH per-connection server daemon (10.0.0.1:45534).
Apr 24 23:42:13.287722 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:45528.service: Deactivated successfully.
Apr 24 23:42:13.288930 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:42:13.289332 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:42:13.290102 systemd-logind[1549]: Removed session 6.
Apr 24 23:42:13.312674 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 45534 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:42:13.313537 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:42:13.316534 systemd-logind[1549]: New session 7 of user core.
Apr 24 23:42:13.322493 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:42:13.371727 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:42:13.371930 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:42:13.941514 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:42:13.941592 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:42:14.147651 kernel: hrtimer: interrupt took 7834624 ns
Apr 24 23:42:14.915176 dockerd[1794]: time="2026-04-24T23:42:14.915058153Z" level=info msg="Starting up"
Apr 24 23:42:15.131611 dockerd[1794]: time="2026-04-24T23:42:15.131553632Z" level=info msg="Loading containers: start."
Apr 24 23:42:15.433283 kernel: Initializing XFRM netlink socket
Apr 24 23:42:15.501365 systemd-networkd[1249]: docker0: Link UP
Apr 24 23:42:15.522335 dockerd[1794]: time="2026-04-24T23:42:15.522291561Z" level=info msg="Loading containers: done."
Apr 24 23:42:15.544569 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck174072131-merged.mount: Deactivated successfully.
Apr 24 23:42:15.546292 dockerd[1794]: time="2026-04-24T23:42:15.546217754Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 23:42:15.546413 dockerd[1794]: time="2026-04-24T23:42:15.546391474Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 24 23:42:15.546572 dockerd[1794]: time="2026-04-24T23:42:15.546541773Z" level=info msg="Daemon has completed initialization"
Apr 24 23:42:15.584682 dockerd[1794]: time="2026-04-24T23:42:15.584564850Z" level=info msg="API listen on /run/docker.sock"
Apr 24 23:42:15.584780 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 23:42:16.224449 containerd[1572]: time="2026-04-24T23:42:16.224393245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 23:42:16.613690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804806959.mount: Deactivated successfully.
Apr 24 23:42:17.582446 containerd[1572]: time="2026-04-24T23:42:17.582401647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:17.582897 containerd[1572]: time="2026-04-24T23:42:17.582845634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 24 23:42:17.583691 containerd[1572]: time="2026-04-24T23:42:17.583657551Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:17.586310 containerd[1572]: time="2026-04-24T23:42:17.586227651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:17.587339 containerd[1572]: time="2026-04-24T23:42:17.587294790Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.362862573s"
Apr 24 23:42:17.587367 containerd[1572]: time="2026-04-24T23:42:17.587339593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 24 23:42:17.589197 containerd[1572]: time="2026-04-24T23:42:17.589173128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 23:42:18.498608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:42:18.538565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:42:18.638147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:18.641902 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:42:18.752326 kubelet[2013]: E0424 23:42:18.752175 2013 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:42:18.755451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:42:18.755587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:42:18.873557 containerd[1572]: time="2026-04-24T23:42:18.873508242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:18.874073 containerd[1572]: time="2026-04-24T23:42:18.874037756Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 24 23:42:18.875224 containerd[1572]: time="2026-04-24T23:42:18.875186014Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:18.877669 containerd[1572]: time="2026-04-24T23:42:18.877631723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:18.878625 containerd[1572]: time="2026-04-24T23:42:18.878581447Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.289373577s"
Apr 24 23:42:18.878625 containerd[1572]: time="2026-04-24T23:42:18.878612397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 24 23:42:18.879822 containerd[1572]: time="2026-04-24T23:42:18.879793249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:42:20.031132 containerd[1572]: time="2026-04-24T23:42:20.030923158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:20.031786 containerd[1572]: time="2026-04-24T23:42:20.031599442Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 24 23:42:20.032397 containerd[1572]: time="2026-04-24T23:42:20.032358752Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:20.034757 containerd[1572]: time="2026-04-24T23:42:20.034712982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:20.035834 containerd[1572]: time="2026-04-24T23:42:20.035794963Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.155965158s"
Apr 24 23:42:20.035834 containerd[1572]: time="2026-04-24T23:42:20.035833144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 24 23:42:20.036869 containerd[1572]: time="2026-04-24T23:42:20.036846784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 23:42:20.891048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288433599.mount: Deactivated successfully.
Apr 24 23:42:21.361180 containerd[1572]: time="2026-04-24T23:42:21.361060595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:21.361679 containerd[1572]: time="2026-04-24T23:42:21.361622213Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 24 23:42:21.362435 containerd[1572]: time="2026-04-24T23:42:21.362392586Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:21.364003 containerd[1572]: time="2026-04-24T23:42:21.363960377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:21.364505 containerd[1572]: time="2026-04-24T23:42:21.364468662Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.327588682s"
Apr 24 23:42:21.364505 containerd[1572]: time="2026-04-24T23:42:21.364502677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 24 23:42:21.365706 containerd[1572]: time="2026-04-24T23:42:21.365681890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 23:42:21.739761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320896183.mount: Deactivated successfully.
Apr 24 23:42:22.753016 containerd[1572]: time="2026-04-24T23:42:22.752916064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:22.753568 containerd[1572]: time="2026-04-24T23:42:22.753358289Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 24 23:42:22.754929 containerd[1572]: time="2026-04-24T23:42:22.754869183Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:22.761382 containerd[1572]: time="2026-04-24T23:42:22.761357045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:22.762797 containerd[1572]: time="2026-04-24T23:42:22.762739491Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.397028378s"
Apr 24 23:42:22.762797 containerd[1572]: time="2026-04-24T23:42:22.762795122Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 24 23:42:22.764232 containerd[1572]: time="2026-04-24T23:42:22.764134252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:42:23.082299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438257208.mount: Deactivated successfully.
Apr 24 23:42:23.087011 containerd[1572]: time="2026-04-24T23:42:23.086977187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:23.087439 containerd[1572]: time="2026-04-24T23:42:23.087406844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 24 23:42:23.088284 containerd[1572]: time="2026-04-24T23:42:23.088225715Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:23.089954 containerd[1572]: time="2026-04-24T23:42:23.089918973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:23.090952 containerd[1572]: time="2026-04-24T23:42:23.090922422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 326.754341ms"
Apr 24 23:42:23.090952 containerd[1572]: time="2026-04-24T23:42:23.090949638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 24 23:42:23.092454 containerd[1572]: time="2026-04-24T23:42:23.092264634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:42:23.454063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297348523.mount: Deactivated successfully.
Apr 24 23:42:24.457279 containerd[1572]: time="2026-04-24T23:42:24.457165442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:24.457745 containerd[1572]: time="2026-04-24T23:42:24.457596484Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826"
Apr 24 23:42:24.458495 containerd[1572]: time="2026-04-24T23:42:24.458470669Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:24.460854 containerd[1572]: time="2026-04-24T23:42:24.460828923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:42:24.461798 containerd[1572]: time="2026-04-24T23:42:24.461776040Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.369396506s"
Apr 24 23:42:24.461832 containerd[1572]: time="2026-04-24T23:42:24.461803407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 24 23:42:26.862176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:26.871451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:42:26.889513 systemd[1]: Reloading requested from client PID 2187 ('systemctl') (unit session-7.scope)...
Apr 24 23:42:26.889531 systemd[1]: Reloading...
Apr 24 23:42:26.929328 zram_generator::config[2229]: No configuration found.
Apr 24 23:42:27.005431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:42:27.049871 systemd[1]: Reloading finished in 160 ms.
Apr 24 23:42:27.089814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 23:42:27.089887 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 23:42:27.090071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:27.091734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:42:27.185399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:27.188529 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:42:27.236647 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:42:27.236647 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:42:27.236647 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:42:27.236937 kubelet[2287]: I0424 23:42:27.236670 2287 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:42:27.386056 kubelet[2287]: I0424 23:42:27.386006 2287 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:42:27.386056 kubelet[2287]: I0424 23:42:27.386039 2287 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:42:27.386254 kubelet[2287]: I0424 23:42:27.386221 2287 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:42:27.407934 kubelet[2287]: E0424 23:42:27.407886 2287 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:42:27.408295 kubelet[2287]: I0424 23:42:27.408272 2287 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:42:27.416067 kubelet[2287]: E0424 23:42:27.416019 2287 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:42:27.416067 kubelet[2287]: I0424 23:42:27.416068 2287 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 24 23:42:27.419433 kubelet[2287]: I0424 23:42:27.419382 2287 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:42:27.419705 kubelet[2287]: I0424 23:42:27.419671 2287 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:42:27.419861 kubelet[2287]: I0424 23:42:27.419708 2287 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 
23:42:27.419939 kubelet[2287]: I0424 23:42:27.419869 2287 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:42:27.419939 kubelet[2287]: I0424 23:42:27.419877 2287 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:42:27.420019 kubelet[2287]: I0424 23:42:27.420007 2287 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:42:27.424732 kubelet[2287]: I0424 23:42:27.424700 2287 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:42:27.424773 kubelet[2287]: I0424 23:42:27.424746 2287 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:42:27.425417 kubelet[2287]: I0424 23:42:27.425382 2287 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:42:27.427144 kubelet[2287]: I0424 23:42:27.427077 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:42:27.429942 kubelet[2287]: I0424 23:42:27.429915 2287 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:42:27.430301 kubelet[2287]: E0424 23:42:27.430213 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:42:27.430395 kubelet[2287]: I0424 23:42:27.430376 2287 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:42:27.431387 kubelet[2287]: E0424 23:42:27.431348 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:42:27.431885 kubelet[2287]: W0424 23:42:27.431857 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 23:42:27.434857 kubelet[2287]: I0424 23:42:27.434842 2287 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:42:27.562496 kubelet[2287]: E0424 23:42:27.560487 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a96f8591627cb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:42:27.434855608 +0000 UTC m=+0.240488793,LastTimestamp:2026-04-24 23:42:27.434855608 +0000 UTC m=+0.240488793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 23:42:27.563157 kubelet[2287]: I0424 23:42:27.562914 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:42:27.563230 kubelet[2287]: I0424 23:42:27.563075 2287 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:42:27.563399 kubelet[2287]: I0424 23:42:27.563214 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:42:27.564438 kubelet[2287]: I0424 23:42:27.562919 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:42:27.564758 kubelet[2287]: I0424 23:42:27.564739 2287 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:42:27.564919 kubelet[2287]: I0424 23:42:27.564870 2287 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:42:27.565213 kubelet[2287]: I0424 23:42:27.565185 2287 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:42:27.565445 kubelet[2287]: I0424 23:42:27.565365 2287 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:42:27.566644 kubelet[2287]: E0424 23:42:27.566441 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:42:27.566818 kubelet[2287]: E0424 23:42:27.566803 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:42:27.567152 kubelet[2287]: I0424 23:42:27.563016 2287 server.go:1289] "Started kubelet" Apr 24 23:42:27.567699 kubelet[2287]: E0424 23:42:27.567446 2287 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:42:27.567699 kubelet[2287]: I0424 23:42:27.567516 2287 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:42:27.579115 kubelet[2287]: I0424 23:42:27.576543 2287 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:42:27.579115 kubelet[2287]: I0424 23:42:27.576558 2287 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:42:27.579115 kubelet[2287]: I0424 23:42:27.576653 2287 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:42:27.579766 kubelet[2287]: E0424 23:42:27.579726 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Apr 24 23:42:27.597946 kubelet[2287]: I0424 23:42:27.597907 2287 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:42:27.598485 kubelet[2287]: I0424 23:42:27.598468 2287 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:42:27.598485 kubelet[2287]: I0424 23:42:27.598482 2287 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:42:27.598563 kubelet[2287]: I0424 23:42:27.598502 2287 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:42:27.599345 kubelet[2287]: I0424 23:42:27.599280 2287 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:42:27.599345 kubelet[2287]: I0424 23:42:27.599327 2287 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:42:27.601535 kubelet[2287]: I0424 23:42:27.600801 2287 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:42:27.601535 kubelet[2287]: I0424 23:42:27.600831 2287 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:42:27.601535 kubelet[2287]: E0424 23:42:27.600918 2287 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:42:27.601701 kubelet[2287]: E0424 23:42:27.601654 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:42:27.667156 kubelet[2287]: E0424 23:42:27.667106 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:42:27.681053 kubelet[2287]: I0424 23:42:27.681024 2287 policy_none.go:49] "None policy: Start" Apr 24 23:42:27.681138 kubelet[2287]: I0424 23:42:27.681069 2287 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:42:27.681138 kubelet[2287]: I0424 23:42:27.681092 2287 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:42:27.688497 kubelet[2287]: E0424 23:42:27.688456 2287 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:42:27.688813 kubelet[2287]: I0424 23:42:27.688787 2287 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:42:27.688949 kubelet[2287]: I0424 23:42:27.688826 2287 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:42:27.689773 kubelet[2287]: I0424 23:42:27.689748 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:42:27.691033 kubelet[2287]: E0424 23:42:27.691001 2287 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:42:27.691076 kubelet[2287]: E0424 23:42:27.691055 2287 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 23:42:27.708792 kubelet[2287]: E0424 23:42:27.708747 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:42:27.711497 kubelet[2287]: E0424 23:42:27.711478 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:42:27.714926 kubelet[2287]: E0424 23:42:27.714894 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:42:27.769844 kubelet[2287]: E0424 23:42:27.769758 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a96f8591627cb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:42:27.434855608 +0000 UTC m=+0.240488793,LastTimestamp:2026-04-24 23:42:27.434855608 
+0000 UTC m=+0.240488793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 23:42:27.780373 kubelet[2287]: E0424 23:42:27.780322 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Apr 24 23:42:27.790496 kubelet[2287]: I0424 23:42:27.790070 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:42:27.790496 kubelet[2287]: E0424 23:42:27.790474 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Apr 24 23:42:27.867129 kubelet[2287]: I0424 23:42:27.867005 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:42:27.867129 kubelet[2287]: I0424 23:42:27.867041 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:42:27.867129 kubelet[2287]: I0424 23:42:27.867053 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:42:27.867129 kubelet[2287]: I0424 23:42:27.867091 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:42:27.867129 kubelet[2287]: I0424 23:42:27.867119 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:42:27.867359 kubelet[2287]: I0424 23:42:27.867166 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:42:27.867359 kubelet[2287]: I0424 23:42:27.867189 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:42:27.867359 kubelet[2287]: I0424 23:42:27.867212 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:42:27.867359 kubelet[2287]: I0424 23:42:27.867224 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:42:27.993896 kubelet[2287]: I0424 23:42:27.993845 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:42:27.994345 kubelet[2287]: E0424 23:42:27.994236 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Apr 24 23:42:28.009938 kubelet[2287]: E0424 23:42:28.009851 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:28.011203 containerd[1572]: time="2026-04-24T23:42:28.011138552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48f478274c6c57a477653897b65770dd,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:28.012498 kubelet[2287]: E0424 23:42:28.012476 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:28.013302 containerd[1572]: time="2026-04-24T23:42:28.013189990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:28.015863 kubelet[2287]: E0424 23:42:28.015817 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:28.016208 containerd[1572]: time="2026-04-24T23:42:28.016173252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:28.182015 kubelet[2287]: E0424 23:42:28.181815 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Apr 24 23:42:28.336549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364813164.mount: Deactivated successfully. Apr 24 23:42:28.350176 containerd[1572]: time="2026-04-24T23:42:28.350111024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:42:28.350790 containerd[1572]: time="2026-04-24T23:42:28.350686992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 24 23:42:28.352975 containerd[1572]: time="2026-04-24T23:42:28.352887411Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:42:28.353874 containerd[1572]: time="2026-04-24T23:42:28.353769130Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:42:28.354631 containerd[1572]: time="2026-04-24T23:42:28.354560205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:42:28.355423 containerd[1572]: time="2026-04-24T23:42:28.355322394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:42:28.356144 containerd[1572]: time="2026-04-24T23:42:28.356113293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:42:28.357032 containerd[1572]: time="2026-04-24T23:42:28.356986823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:42:28.358443 containerd[1572]: time="2026-04-24T23:42:28.358336327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 347.017777ms" Apr 24 23:42:28.358993 containerd[1572]: time="2026-04-24T23:42:28.358893924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 345.660541ms" Apr 24 23:42:28.361500 containerd[1572]: time="2026-04-24T23:42:28.361476732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 345.173415ms" Apr 24 23:42:28.398059 kubelet[2287]: I0424 23:42:28.397943 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:42:28.398493 kubelet[2287]: E0424 23:42:28.398382 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Apr 24 23:42:28.578507 kubelet[2287]: E0424 23:42:28.578358 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:42:28.658238 containerd[1572]: time="2026-04-24T23:42:28.658145967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:28.658238 containerd[1572]: time="2026-04-24T23:42:28.658188904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:28.658238 containerd[1572]: time="2026-04-24T23:42:28.658198475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:28.658516 containerd[1572]: time="2026-04-24T23:42:28.658315460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:28.658601 containerd[1572]: time="2026-04-24T23:42:28.658346312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:42:28.658601 containerd[1572]: time="2026-04-24T23:42:28.658583465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:42:28.658678 containerd[1572]: time="2026-04-24T23:42:28.658592934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:42:28.658922 containerd[1572]: time="2026-04-24T23:42:28.658874633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:42:28.659125 containerd[1572]: time="2026-04-24T23:42:28.659059960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:42:28.659125 containerd[1572]: time="2026-04-24T23:42:28.659098255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:42:28.659352 containerd[1572]: time="2026-04-24T23:42:28.659106524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:42:28.659352 containerd[1572]: time="2026-04-24T23:42:28.659192967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:42:28.793966 kubelet[2287]: E0424 23:42:28.793898 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:42:28.850003 containerd[1572]: time="2026-04-24T23:42:28.849836949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48f478274c6c57a477653897b65770dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9a44e5b4eec75c2b595b0017faf611f4823fed1f549f957d81b617e860ff2c9\""
Apr 24 23:42:28.852969 containerd[1572]: time="2026-04-24T23:42:28.852941940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e10d8414408574535c395dccffc55c57c689b389e9f510ca50ebc9c53f467f\""
Apr 24 23:42:28.854775 kubelet[2287]: E0424 23:42:28.854735 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:28.855426 kubelet[2287]: E0424 23:42:28.855386 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:28.860049 containerd[1572]: time="2026-04-24T23:42:28.860010623Z" level=info msg="CreateContainer within sandbox \"a9a44e5b4eec75c2b595b0017faf611f4823fed1f549f957d81b617e860ff2c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 24 23:42:28.864507 containerd[1572]: time="2026-04-24T23:42:28.864335809Z" level=info msg="CreateContainer within sandbox \"b3e10d8414408574535c395dccffc55c57c689b389e9f510ca50ebc9c53f467f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 24 23:42:28.872958 containerd[1572]: time="2026-04-24T23:42:28.872203754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"07e88f4f10cb48b096e505d09b7ee79242614a18c5a07a9f0cdc5014c5956b8d\""
Apr 24 23:42:28.873818 kubelet[2287]: E0424 23:42:28.873788 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:28.878739 containerd[1572]: time="2026-04-24T23:42:28.878695734Z" level=info msg="CreateContainer within sandbox \"07e88f4f10cb48b096e505d09b7ee79242614a18c5a07a9f0cdc5014c5956b8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 24 23:42:28.878894 containerd[1572]: time="2026-04-24T23:42:28.878868552Z" level=info msg="CreateContainer within sandbox \"a9a44e5b4eec75c2b595b0017faf611f4823fed1f549f957d81b617e860ff2c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6bee1807795b87a6e18f432c8c7736afeefdf7cc2811a75cef95e3e7b476ea2\""
Apr 24 23:42:28.879898 containerd[1572]: time="2026-04-24T23:42:28.879862054Z" level=info msg="StartContainer for \"c6bee1807795b87a6e18f432c8c7736afeefdf7cc2811a75cef95e3e7b476ea2\""
Apr 24 23:42:28.881164 containerd[1572]: time="2026-04-24T23:42:28.881137018Z" level=info msg="CreateContainer within sandbox \"b3e10d8414408574535c395dccffc55c57c689b389e9f510ca50ebc9c53f467f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"af9bd1860ee694ca90257da66b23b2ec8446684c72bfaf3d64015c1958c48782\""
Apr 24 23:42:28.881537 containerd[1572]: time="2026-04-24T23:42:28.881514689Z" level=info msg="StartContainer for \"af9bd1860ee694ca90257da66b23b2ec8446684c72bfaf3d64015c1958c48782\""
Apr 24 23:42:28.882904 kubelet[2287]: E0424 23:42:28.882865 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:42:28.889848 containerd[1572]: time="2026-04-24T23:42:28.889808619Z" level=info msg="CreateContainer within sandbox \"07e88f4f10cb48b096e505d09b7ee79242614a18c5a07a9f0cdc5014c5956b8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d04dabc4b6342dc738ad639809d142f5680972ce359e7d1e7e82bbc6c9507623\""
Apr 24 23:42:28.890485 containerd[1572]: time="2026-04-24T23:42:28.890187091Z" level=info msg="StartContainer for \"d04dabc4b6342dc738ad639809d142f5680972ce359e7d1e7e82bbc6c9507623\""
Apr 24 23:42:28.915730 kubelet[2287]: E0424 23:42:28.915665 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:42:28.958671 containerd[1572]: time="2026-04-24T23:42:28.958633331Z" level=info msg="StartContainer for \"c6bee1807795b87a6e18f432c8c7736afeefdf7cc2811a75cef95e3e7b476ea2\" returns successfully"
Apr 24 23:42:28.963820 containerd[1572]: time="2026-04-24T23:42:28.963793604Z" level=info msg="StartContainer for \"af9bd1860ee694ca90257da66b23b2ec8446684c72bfaf3d64015c1958c48782\" returns successfully"
Apr 24 23:42:28.971788 containerd[1572]: time="2026-04-24T23:42:28.971558066Z" level=info msg="StartContainer for \"d04dabc4b6342dc738ad639809d142f5680972ce359e7d1e7e82bbc6c9507623\" returns successfully"
Apr 24 23:42:28.982957 kubelet[2287]: E0424 23:42:28.982749 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="1.6s"
Apr 24 23:42:29.399736 kubelet[2287]: I0424 23:42:29.394891 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 24 23:42:29.615942 kubelet[2287]: E0424 23:42:29.615905 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:42:29.616060 kubelet[2287]: E0424 23:42:29.616045 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:29.616207 kubelet[2287]: E0424 23:42:29.616060 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:42:29.616207 kubelet[2287]: E0424 23:42:29.616171 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:29.617743 kubelet[2287]: E0424 23:42:29.617701 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:42:29.617811 kubelet[2287]: E0424 23:42:29.617785 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:30.635077 kubelet[2287]: E0424 23:42:30.635021 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:42:30.635503 kubelet[2287]: E0424 23:42:30.635206 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:30.635503 kubelet[2287]: E0424 23:42:30.635454 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:42:30.635570 kubelet[2287]: E0424 23:42:30.635521 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:30.722169 kubelet[2287]: E0424 23:42:30.722091 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 24 23:42:30.810774 kubelet[2287]: I0424 23:42:30.810686 2287 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 24 23:42:30.810774 kubelet[2287]: E0424 23:42:30.810765 2287 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 24 23:42:30.818986 kubelet[2287]: E0424 23:42:30.818822 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:42:30.919634 kubelet[2287]: E0424 23:42:30.919469 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:42:31.019793 kubelet[2287]: E0424 23:42:31.019566 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:42:31.119988 kubelet[2287]: E0424 23:42:31.119928 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:42:31.220997 kubelet[2287]: E0424 23:42:31.220938 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:42:31.373287 kubelet[2287]: I0424 23:42:31.373032 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:31.390221 kubelet[2287]: E0424 23:42:31.390073 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:31.390221 kubelet[2287]: I0424 23:42:31.390154 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:31.397757 kubelet[2287]: E0424 23:42:31.397591 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:31.397757 kubelet[2287]: I0424 23:42:31.397705 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:31.400095 kubelet[2287]: E0424 23:42:31.400026 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:31.428930 kubelet[2287]: I0424 23:42:31.428855 2287 apiserver.go:52] "Watching apiserver"
Apr 24 23:42:31.466675 kubelet[2287]: I0424 23:42:31.466403 2287 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 24 23:42:31.634757 kubelet[2287]: I0424 23:42:31.634621 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:31.637450 kubelet[2287]: E0424 23:42:31.637429 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:31.637732 kubelet[2287]: E0424 23:42:31.637701 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:32.678016 systemd[1]: Reloading requested from client PID 2570 ('systemctl') (unit session-7.scope)...
Apr 24 23:42:32.678049 systemd[1]: Reloading...
Apr 24 23:42:32.728285 zram_generator::config[2606]: No configuration found.
Apr 24 23:42:32.816162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:42:32.865838 systemd[1]: Reloading finished in 187 ms.
Apr 24 23:42:32.887452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:42:32.887794 kubelet[2287]: I0424 23:42:32.887567 2287 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:42:32.912102 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:42:32.912395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:32.920607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:42:33.205359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:42:33.209727 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:42:33.249380 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:42:33.249380 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:42:33.249380 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:42:33.249723 kubelet[2664]: I0424 23:42:33.249427 2664 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:42:33.255701 kubelet[2664]: I0424 23:42:33.255676 2664 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:42:33.255701 kubelet[2664]: I0424 23:42:33.255692 2664 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:42:33.255873 kubelet[2664]: I0424 23:42:33.255858 2664 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:42:33.257304 kubelet[2664]: I0424 23:42:33.257288 2664 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 24 23:42:33.264478 kubelet[2664]: I0424 23:42:33.264445 2664 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:42:33.267004 kubelet[2664]: E0424 23:42:33.266979 2664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:42:33.267004 kubelet[2664]: I0424 23:42:33.267002 2664 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:42:33.270373 kubelet[2664]: I0424 23:42:33.270346 2664 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:42:33.270780 kubelet[2664]: I0424 23:42:33.270742 2664 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:42:33.270930 kubelet[2664]: I0424 23:42:33.270773 2664 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 24 23:42:33.270930 kubelet[2664]: I0424 23:42:33.270928 2664 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:42:33.271031 kubelet[2664]: I0424 23:42:33.270950 2664 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:42:33.271031 kubelet[2664]: I0424 23:42:33.270986 2664 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:42:33.271157 kubelet[2664]: I0424 23:42:33.271133 2664 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:42:33.271157 kubelet[2664]: I0424 23:42:33.271149 2664 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:42:33.272431 kubelet[2664]: I0424 23:42:33.271289 2664 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:42:33.272431 kubelet[2664]: I0424 23:42:33.271304 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:42:33.277270 kubelet[2664]: I0424 23:42:33.275004 2664 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:42:33.279375 kubelet[2664]: I0424 23:42:33.279352 2664 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:42:33.281819 kubelet[2664]: I0424 23:42:33.281800 2664 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:42:33.281819 kubelet[2664]: I0424 23:42:33.281836 2664 server.go:1289] "Started kubelet"
Apr 24 23:42:33.282147 kubelet[2664]: I0424 23:42:33.282013 2664 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:42:33.282675 kubelet[2664]: I0424 23:42:33.282408 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:42:33.282675 kubelet[2664]: I0424 23:42:33.282614 2664 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:42:33.284092 kubelet[2664]: I0424 23:42:33.283648 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:42:33.284092 kubelet[2664]: I0424 23:42:33.283683 2664 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:42:33.284917 kubelet[2664]: I0424 23:42:33.284825 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:42:33.286023 kubelet[2664]: I0424 23:42:33.285999 2664 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:42:33.286157 kubelet[2664]: I0424 23:42:33.286115 2664 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:42:33.286490 kubelet[2664]: E0424 23:42:33.286477 2664 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:42:33.286616 kubelet[2664]: I0424 23:42:33.286544 2664 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:42:33.286641 kubelet[2664]: I0424 23:42:33.286490 2664 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:42:33.286641 kubelet[2664]: I0424 23:42:33.286622 2664 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:42:33.287432 kubelet[2664]: I0424 23:42:33.287416 2664 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:42:33.301707 kubelet[2664]: I0424 23:42:33.301129 2664 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:42:33.302854 kubelet[2664]: I0424 23:42:33.302838 2664 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:42:33.302854 kubelet[2664]: I0424 23:42:33.302855 2664 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:42:33.302944 kubelet[2664]: I0424 23:42:33.302893 2664 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:42:33.302944 kubelet[2664]: I0424 23:42:33.302900 2664 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:42:33.302975 kubelet[2664]: E0424 23:42:33.302948 2664 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:42:33.321187 kubelet[2664]: I0424 23:42:33.321169 2664 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:42:33.321187 kubelet[2664]: I0424 23:42:33.321179 2664 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:42:33.321187 kubelet[2664]: I0424 23:42:33.321191 2664 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:42:33.321329 kubelet[2664]: I0424 23:42:33.321310 2664 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 24 23:42:33.321371 kubelet[2664]: I0424 23:42:33.321324 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 24 23:42:33.321371 kubelet[2664]: I0424 23:42:33.321337 2664 policy_none.go:49] "None policy: Start"
Apr 24 23:42:33.321371 kubelet[2664]: I0424 23:42:33.321344 2664 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:42:33.321371 kubelet[2664]: I0424 23:42:33.321371 2664 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:42:33.321458 kubelet[2664]: I0424 23:42:33.321442 2664 state_mem.go:75] "Updated machine memory state"
Apr 24 23:42:33.323490 kubelet[2664]: E0424 23:42:33.322196 2664 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:42:33.323490 kubelet[2664]: I0424 23:42:33.322382 2664 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:42:33.323490 kubelet[2664]: I0424 23:42:33.322390 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:42:33.323490 kubelet[2664]: I0424 23:42:33.322681 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:42:33.324342 kubelet[2664]: E0424 23:42:33.324301 2664 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:42:33.404536 kubelet[2664]: I0424 23:42:33.404456 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.404733 kubelet[2664]: I0424 23:42:33.404670 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:33.404797 kubelet[2664]: I0424 23:42:33.404745 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:33.426903 kubelet[2664]: I0424 23:42:33.426819 2664 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 24 23:42:33.437542 kubelet[2664]: I0424 23:42:33.437486 2664 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 24 23:42:33.437725 kubelet[2664]: I0424 23:42:33.437607 2664 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 24 23:42:33.487727 kubelet[2664]: I0424 23:42:33.487655 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:33.487727 kubelet[2664]: I0424 23:42:33.487728 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:33.487886 kubelet[2664]: I0424 23:42:33.487759 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:33.487886 kubelet[2664]: I0424 23:42:33.487776 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.487886 kubelet[2664]: I0424 23:42:33.487791 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.487886 kubelet[2664]: I0424 23:42:33.487805 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48f478274c6c57a477653897b65770dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"48f478274c6c57a477653897b65770dd\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:33.487886 kubelet[2664]: I0424 23:42:33.487817 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.487986 kubelet[2664]: I0424 23:42:33.487828 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.487986 kubelet[2664]: I0424 23:42:33.487842 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:33.640627 sudo[2703]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 24 23:42:33.640844 sudo[2703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 24 23:42:33.710778 kubelet[2664]: E0424 23:42:33.710741 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:33.710866 kubelet[2664]: E0424 23:42:33.710807 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:33.710866 kubelet[2664]: E0424 23:42:33.710744 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:34.128758 sudo[2703]: pam_unix(sudo:session): session closed for user root
Apr 24 23:42:34.274943 kubelet[2664]: I0424 23:42:34.273991 2664 apiserver.go:52] "Watching apiserver"
Apr 24 23:42:34.352103 kubelet[2664]: I0424 23:42:34.351793 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:34.355373 kubelet[2664]: I0424 23:42:34.352370 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:34.355847 kubelet[2664]: I0424 23:42:34.355758 2664 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:34.361291 kubelet[2664]: E0424 23:42:34.361156 2664 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:42:34.363567 kubelet[2664]: E0424 23:42:34.361536 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:34.363567 kubelet[2664]: E0424 23:42:34.362468 2664 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:42:34.363567 kubelet[2664]: E0424 23:42:34.362601 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:34.363567 kubelet[2664]: E0424 23:42:34.362671 2664 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:42:34.363567 kubelet[2664]: E0424 23:42:34.362796 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:34.377869 kubelet[2664]: I0424 23:42:34.377822 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.377805899 podStartE2EDuration="1.377805899s" podCreationTimestamp="2026-04-24 23:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:34.376809987 +0000 UTC m=+1.159881590" watchObservedRunningTime="2026-04-24 23:42:34.377805899 +0000 UTC m=+1.160877507"
Apr 24 23:42:34.383495 kubelet[2664]: I0424 23:42:34.383377 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.383366944 podStartE2EDuration="1.383366944s" podCreationTimestamp="2026-04-24 23:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:34.38188456 +0000 UTC m=+1.164956171" watchObservedRunningTime="2026-04-24 23:42:34.383366944 +0000 UTC m=+1.166438553"
Apr 24 23:42:34.387605 kubelet[2664]: I0424 23:42:34.387146 2664 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 24 23:42:34.394289 kubelet[2664]: I0424 23:42:34.394142 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.394133077 podStartE2EDuration="1.394133077s" podCreationTimestamp="2026-04-24 23:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:34.388085085 +0000 UTC m=+1.171156687" watchObservedRunningTime="2026-04-24 23:42:34.394133077 +0000 UTC m=+1.177204687"
Apr 24 23:42:35.354821 kubelet[2664]: E0424 23:42:35.354493 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:35.354821 kubelet[2664]: E0424 23:42:35.354570 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:35.354821 kubelet[2664]: E0424 23:42:35.354783 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:35.685824 sudo[1775]: pam_unix(sudo:session): session closed for user root
Apr 24 23:42:35.687542 sshd[1768]: pam_unix(sshd:session): session closed for user core
Apr 24 23:42:35.690277 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:45534.service: Deactivated successfully.
Apr 24 23:42:35.692204 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 23:42:35.692233 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit.
Apr 24 23:42:35.693063 systemd-logind[1549]: Removed session 7.
Apr 24 23:42:36.357006 kubelet[2664]: E0424 23:42:36.356974 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:39.272620 kubelet[2664]: E0424 23:42:39.272587 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:39.363889 kubelet[2664]: E0424 23:42:39.363430 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:39.751478 kubelet[2664]: I0424 23:42:39.748902 2664 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 24 23:42:39.753747 containerd[1572]: time="2026-04-24T23:42:39.753630386Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 24 23:42:39.760288 kubelet[2664]: I0424 23:42:39.757750 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 24 23:42:40.369960 kubelet[2664]: E0424 23:42:40.369669 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:40.396016 kubelet[2664]: E0424 23:42:40.395956 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:42:41.015692 kubelet[2664]: I0424 23:42:41.015564 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-xtables-lock\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9"
Apr 24 23:42:41.015692 kubelet[2664]: I0424 23:42:41.015683 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89aadd07-4bff-4d36-ad4c-ff232e640d5d-clustermesh-secrets\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9"
Apr 24 23:42:41.015940 kubelet[2664]: I0424 23:42:41.015757 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21eeaf5f-377e-432e-9465-47d1aade4299-lib-modules\") pod \"kube-proxy-8nzxl\" (UID: \"21eeaf5f-377e-432e-9465-47d1aade4299\") " pod="kube-system/kube-proxy-8nzxl"
Apr 24 23:42:41.015940 kubelet[2664]: I0424 23:42:41.015782 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName:
\"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-run\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.015940 kubelet[2664]: I0424 23:42:41.015844 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-bpf-maps\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.015940 kubelet[2664]: I0424 23:42:41.015913 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-cgroup\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.015940 kubelet[2664]: I0424 23:42:41.015927 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-net\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016183 kubelet[2664]: I0424 23:42:41.015943 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b13bdef4-3c13-4253-b936-2909f7d4c686-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9hjqk\" (UID: \"b13bdef4-3c13-4253-b936-2909f7d4c686\") " pod="kube-system/cilium-operator-6c4d7847fc-9hjqk" Apr 24 23:42:41.016183 kubelet[2664]: I0424 23:42:41.015957 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hubble-tls\") pod 
\"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016183 kubelet[2664]: I0424 23:42:41.015991 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cni-path\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016183 kubelet[2664]: I0424 23:42:41.016053 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-config-path\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016183 kubelet[2664]: I0424 23:42:41.016103 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4q9\" (UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-kube-api-access-cn4q9\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016353 kubelet[2664]: I0424 23:42:41.016117 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21eeaf5f-377e-432e-9465-47d1aade4299-kube-proxy\") pod \"kube-proxy-8nzxl\" (UID: \"21eeaf5f-377e-432e-9465-47d1aade4299\") " pod="kube-system/kube-proxy-8nzxl" Apr 24 23:42:41.016353 kubelet[2664]: I0424 23:42:41.016128 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hostproc\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016353 
kubelet[2664]: I0424 23:42:41.016140 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-kernel\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016353 kubelet[2664]: I0424 23:42:41.016160 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-etc-cni-netd\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016353 kubelet[2664]: I0424 23:42:41.016170 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-lib-modules\") pod \"cilium-59tq9\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " pod="kube-system/cilium-59tq9" Apr 24 23:42:41.016353 kubelet[2664]: I0424 23:42:41.016180 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21eeaf5f-377e-432e-9465-47d1aade4299-xtables-lock\") pod \"kube-proxy-8nzxl\" (UID: \"21eeaf5f-377e-432e-9465-47d1aade4299\") " pod="kube-system/kube-proxy-8nzxl" Apr 24 23:42:41.016445 kubelet[2664]: I0424 23:42:41.016193 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9t6\" (UniqueName: \"kubernetes.io/projected/21eeaf5f-377e-432e-9465-47d1aade4299-kube-api-access-8t9t6\") pod \"kube-proxy-8nzxl\" (UID: \"21eeaf5f-377e-432e-9465-47d1aade4299\") " pod="kube-system/kube-proxy-8nzxl" Apr 24 23:42:41.016445 kubelet[2664]: I0424 23:42:41.016206 2664 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdh8\" (UniqueName: \"kubernetes.io/projected/b13bdef4-3c13-4253-b936-2909f7d4c686-kube-api-access-2jdh8\") pod \"cilium-operator-6c4d7847fc-9hjqk\" (UID: \"b13bdef4-3c13-4253-b936-2909f7d4c686\") " pod="kube-system/cilium-operator-6c4d7847fc-9hjqk" Apr 24 23:42:41.141285 kubelet[2664]: E0424 23:42:41.141226 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.142279 kubelet[2664]: E0424 23:42:41.141501 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.142397 containerd[1572]: time="2026-04-24T23:42:41.141758068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nzxl,Uid:21eeaf5f-377e-432e-9465-47d1aade4299,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:41.142397 containerd[1572]: time="2026-04-24T23:42:41.141976219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59tq9,Uid:89aadd07-4bff-4d36-ad4c-ff232e640d5d,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:41.169227 containerd[1572]: time="2026-04-24T23:42:41.169113680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:41.169227 containerd[1572]: time="2026-04-24T23:42:41.169214431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:41.169388 containerd[1572]: time="2026-04-24T23:42:41.169237074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.169443 containerd[1572]: time="2026-04-24T23:42:41.169360108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.170743 containerd[1572]: time="2026-04-24T23:42:41.170625900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:41.170743 containerd[1572]: time="2026-04-24T23:42:41.170678088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:41.170743 containerd[1572]: time="2026-04-24T23:42:41.170689821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.170891 containerd[1572]: time="2026-04-24T23:42:41.170837289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.217451 containerd[1572]: time="2026-04-24T23:42:41.217398402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59tq9,Uid:89aadd07-4bff-4d36-ad4c-ff232e640d5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\"" Apr 24 23:42:41.217610 containerd[1572]: time="2026-04-24T23:42:41.217452631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nzxl,Uid:21eeaf5f-377e-432e-9465-47d1aade4299,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c6fc79015ebfa787ace102d03d0600e1feb725f50d9133acecda621ae540533\"" Apr 24 23:42:41.218751 kubelet[2664]: E0424 23:42:41.218723 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.218862 kubelet[2664]: E0424 23:42:41.218846 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.219853 containerd[1572]: time="2026-04-24T23:42:41.219825702Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 24 23:42:41.222601 containerd[1572]: time="2026-04-24T23:42:41.222571814Z" level=info msg="CreateContainer within sandbox \"2c6fc79015ebfa787ace102d03d0600e1feb725f50d9133acecda621ae540533\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:42:41.251840 kubelet[2664]: E0424 23:42:41.251220 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.254799 containerd[1572]: time="2026-04-24T23:42:41.254668061Z" level=info msg="CreateContainer within sandbox 
\"2c6fc79015ebfa787ace102d03d0600e1feb725f50d9133acecda621ae540533\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"459276cbee8b721a2ca25e56e3cb65670caebc2174c1741cc7d2d5ecff7247bd\"" Apr 24 23:42:41.254933 containerd[1572]: time="2026-04-24T23:42:41.254737566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9hjqk,Uid:b13bdef4-3c13-4253-b936-2909f7d4c686,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:41.255655 containerd[1572]: time="2026-04-24T23:42:41.255479407Z" level=info msg="StartContainer for \"459276cbee8b721a2ca25e56e3cb65670caebc2174c1741cc7d2d5ecff7247bd\"" Apr 24 23:42:41.281755 containerd[1572]: time="2026-04-24T23:42:41.281382248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:41.281755 containerd[1572]: time="2026-04-24T23:42:41.281476808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:41.281755 containerd[1572]: time="2026-04-24T23:42:41.281486031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.281755 containerd[1572]: time="2026-04-24T23:42:41.281546868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:41.308291 containerd[1572]: time="2026-04-24T23:42:41.307664780Z" level=info msg="StartContainer for \"459276cbee8b721a2ca25e56e3cb65670caebc2174c1741cc7d2d5ecff7247bd\" returns successfully" Apr 24 23:42:41.337202 containerd[1572]: time="2026-04-24T23:42:41.337146145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9hjqk,Uid:b13bdef4-3c13-4253-b936-2909f7d4c686,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\"" Apr 24 23:42:41.337746 kubelet[2664]: E0424 23:42:41.337698 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.375159 kubelet[2664]: E0424 23:42:41.375126 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.375511 kubelet[2664]: E0424 23:42:41.375356 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:41.392505 kubelet[2664]: I0424 23:42:41.392455 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8nzxl" podStartSLOduration=1.392439101 podStartE2EDuration="1.392439101s" podCreationTimestamp="2026-04-24 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:41.39218242 +0000 UTC m=+8.175254030" watchObservedRunningTime="2026-04-24 23:42:41.392439101 +0000 UTC m=+8.175510711" Apr 24 23:42:44.617987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344422008.mount: Deactivated successfully. 
Apr 24 23:42:45.450261 kubelet[2664]: E0424 23:42:45.450149 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:45.945468 containerd[1572]: time="2026-04-24T23:42:45.945310549Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:42:45.946848 containerd[1572]: time="2026-04-24T23:42:45.945770223Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 24 23:42:45.947460 containerd[1572]: time="2026-04-24T23:42:45.947331083Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:42:45.949660 containerd[1572]: time="2026-04-24T23:42:45.949509012Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.729614992s" Apr 24 23:42:45.949660 containerd[1572]: time="2026-04-24T23:42:45.949551397Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 24 23:42:45.953722 containerd[1572]: time="2026-04-24T23:42:45.953072706Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 24 23:42:45.958836 containerd[1572]: time="2026-04-24T23:42:45.958793272Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 23:42:45.971387 containerd[1572]: time="2026-04-24T23:42:45.971336683Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\"" Apr 24 23:42:45.972117 containerd[1572]: time="2026-04-24T23:42:45.972085191Z" level=info msg="StartContainer for \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\"" Apr 24 23:42:46.034706 containerd[1572]: time="2026-04-24T23:42:46.034665690Z" level=info msg="StartContainer for \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\" returns successfully" Apr 24 23:42:46.108197 containerd[1572]: time="2026-04-24T23:42:46.106690322Z" level=info msg="shim disconnected" id=84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf namespace=k8s.io Apr 24 23:42:46.108197 containerd[1572]: time="2026-04-24T23:42:46.108172621Z" level=warning msg="cleaning up after shim disconnected" id=84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf namespace=k8s.io Apr 24 23:42:46.108197 containerd[1572]: time="2026-04-24T23:42:46.108187864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:42:46.396659 kubelet[2664]: E0424 23:42:46.396389 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:46.403202 containerd[1572]: time="2026-04-24T23:42:46.403082284Z" level=info 
msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 23:42:46.419030 containerd[1572]: time="2026-04-24T23:42:46.418969566Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\"" Apr 24 23:42:46.420184 containerd[1572]: time="2026-04-24T23:42:46.419924058Z" level=info msg="StartContainer for \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\"" Apr 24 23:42:46.480235 containerd[1572]: time="2026-04-24T23:42:46.480174987Z" level=info msg="StartContainer for \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\" returns successfully" Apr 24 23:42:46.490457 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:42:46.490661 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:42:46.490708 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:42:46.498844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:42:46.543953 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 24 23:42:46.546073 containerd[1572]: time="2026-04-24T23:42:46.546034567Z" level=info msg="shim disconnected" id=6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14 namespace=k8s.io Apr 24 23:42:46.546140 containerd[1572]: time="2026-04-24T23:42:46.546075761Z" level=warning msg="cleaning up after shim disconnected" id=6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14 namespace=k8s.io Apr 24 23:42:46.546140 containerd[1572]: time="2026-04-24T23:42:46.546082035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:42:46.968705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf-rootfs.mount: Deactivated successfully. Apr 24 23:42:47.310610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504693754.mount: Deactivated successfully. Apr 24 23:42:47.399765 kubelet[2664]: E0424 23:42:47.399716 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:47.409886 containerd[1572]: time="2026-04-24T23:42:47.409822241Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 23:42:47.439277 containerd[1572]: time="2026-04-24T23:42:47.437667739Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\"" Apr 24 23:42:47.449338 containerd[1572]: time="2026-04-24T23:42:47.446201037Z" level=info msg="StartContainer for \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\"" Apr 24 23:42:47.512039 containerd[1572]: time="2026-04-24T23:42:47.511993218Z" level=info 
msg="StartContainer for \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\" returns successfully" Apr 24 23:42:47.556307 containerd[1572]: time="2026-04-24T23:42:47.556110054Z" level=info msg="shim disconnected" id=500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9 namespace=k8s.io Apr 24 23:42:47.556307 containerd[1572]: time="2026-04-24T23:42:47.556236477Z" level=warning msg="cleaning up after shim disconnected" id=500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9 namespace=k8s.io Apr 24 23:42:47.556307 containerd[1572]: time="2026-04-24T23:42:47.556304550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:42:47.694096 containerd[1572]: time="2026-04-24T23:42:47.693801850Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:42:47.694488 containerd[1572]: time="2026-04-24T23:42:47.694415539Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 24 23:42:47.695207 containerd[1572]: time="2026-04-24T23:42:47.695179144Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:42:47.696336 containerd[1572]: time="2026-04-24T23:42:47.696309527Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.743200807s" Apr 24 23:42:47.696417 containerd[1572]: 
time="2026-04-24T23:42:47.696340785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 24 23:42:47.702794 containerd[1572]: time="2026-04-24T23:42:47.702725745Z" level=info msg="CreateContainer within sandbox \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 24 23:42:47.712275 containerd[1572]: time="2026-04-24T23:42:47.712224517Z" level=info msg="CreateContainer within sandbox \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\"" Apr 24 23:42:47.713036 containerd[1572]: time="2026-04-24T23:42:47.713000264Z" level=info msg="StartContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\"" Apr 24 23:42:47.767085 containerd[1572]: time="2026-04-24T23:42:47.767046052Z" level=info msg="StartContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" returns successfully" Apr 24 23:42:47.969491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9-rootfs.mount: Deactivated successfully. 
Apr 24 23:42:48.434921 kubelet[2664]: E0424 23:42:48.434761 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:48.437293 kubelet[2664]: E0424 23:42:48.435889 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:48.454680 containerd[1572]: time="2026-04-24T23:42:48.454320215Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 23:42:48.475685 containerd[1572]: time="2026-04-24T23:42:48.473999504Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\"" Apr 24 23:42:48.479922 containerd[1572]: time="2026-04-24T23:42:48.479860451Z" level=info msg="StartContainer for \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\"" Apr 24 23:42:48.491329 kubelet[2664]: I0424 23:42:48.489404 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9hjqk" podStartSLOduration=2.132189175 podStartE2EDuration="8.489188684s" podCreationTimestamp="2026-04-24 23:42:40 +0000 UTC" firstStartedPulling="2026-04-24 23:42:41.340682479 +0000 UTC m=+8.123754070" lastFinishedPulling="2026-04-24 23:42:47.697681986 +0000 UTC m=+14.480753579" observedRunningTime="2026-04-24 23:42:48.460049982 +0000 UTC m=+15.243121575" watchObservedRunningTime="2026-04-24 23:42:48.489188684 +0000 UTC m=+15.272260412" Apr 24 23:42:48.603916 containerd[1572]: time="2026-04-24T23:42:48.603865557Z" level=info msg="StartContainer 
for \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\" returns successfully" Apr 24 23:42:48.629504 containerd[1572]: time="2026-04-24T23:42:48.629426789Z" level=info msg="shim disconnected" id=2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001 namespace=k8s.io Apr 24 23:42:48.629504 containerd[1572]: time="2026-04-24T23:42:48.629494987Z" level=warning msg="cleaning up after shim disconnected" id=2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001 namespace=k8s.io Apr 24 23:42:48.629504 containerd[1572]: time="2026-04-24T23:42:48.629501704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:42:48.968890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001-rootfs.mount: Deactivated successfully. Apr 24 23:42:49.444706 kubelet[2664]: E0424 23:42:49.440814 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:49.444706 kubelet[2664]: E0424 23:42:49.440814 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:49.453081 containerd[1572]: time="2026-04-24T23:42:49.452641211Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 24 23:42:49.485322 containerd[1572]: time="2026-04-24T23:42:49.485280054Z" level=info msg="CreateContainer within sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\"" Apr 24 23:42:49.486152 containerd[1572]: time="2026-04-24T23:42:49.486121895Z" 
level=info msg="StartContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\"" Apr 24 23:42:49.543955 containerd[1572]: time="2026-04-24T23:42:49.543906021Z" level=info msg="StartContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" returns successfully" Apr 24 23:42:49.708603 kubelet[2664]: I0424 23:42:49.708578 2664 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:42:49.800019 kubelet[2664]: I0424 23:42:49.799879 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf8m\" (UniqueName: \"kubernetes.io/projected/389bdae9-cc94-4a0f-8677-3648983c2a5a-kube-api-access-mnf8m\") pod \"coredns-674b8bbfcf-69jtt\" (UID: \"389bdae9-cc94-4a0f-8677-3648983c2a5a\") " pod="kube-system/coredns-674b8bbfcf-69jtt" Apr 24 23:42:49.800019 kubelet[2664]: I0424 23:42:49.799914 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7badf70f-3c71-4c4e-973b-cd4d731e54c8-config-volume\") pod \"coredns-674b8bbfcf-grn5x\" (UID: \"7badf70f-3c71-4c4e-973b-cd4d731e54c8\") " pod="kube-system/coredns-674b8bbfcf-grn5x" Apr 24 23:42:49.800019 kubelet[2664]: I0424 23:42:49.799930 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/389bdae9-cc94-4a0f-8677-3648983c2a5a-config-volume\") pod \"coredns-674b8bbfcf-69jtt\" (UID: \"389bdae9-cc94-4a0f-8677-3648983c2a5a\") " pod="kube-system/coredns-674b8bbfcf-69jtt" Apr 24 23:42:49.800019 kubelet[2664]: I0424 23:42:49.799941 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nzb\" (UniqueName: \"kubernetes.io/projected/7badf70f-3c71-4c4e-973b-cd4d731e54c8-kube-api-access-94nzb\") pod \"coredns-674b8bbfcf-grn5x\" (UID: 
\"7badf70f-3c71-4c4e-973b-cd4d731e54c8\") " pod="kube-system/coredns-674b8bbfcf-grn5x" Apr 24 23:42:50.050370 kubelet[2664]: E0424 23:42:50.049703 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:50.053961 kubelet[2664]: E0424 23:42:50.053819 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:50.055183 containerd[1572]: time="2026-04-24T23:42:50.055141579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69jtt,Uid:389bdae9-cc94-4a0f-8677-3648983c2a5a,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:50.055307 containerd[1572]: time="2026-04-24T23:42:50.055161260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-grn5x,Uid:7badf70f-3c71-4c4e-973b-cd4d731e54c8,Namespace:kube-system,Attempt:0,}" Apr 24 23:42:50.448500 kubelet[2664]: E0424 23:42:50.448351 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:50.460544 kubelet[2664]: I0424 23:42:50.460466 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-59tq9" podStartSLOduration=5.7271532910000005 podStartE2EDuration="10.460451822s" podCreationTimestamp="2026-04-24 23:42:40 +0000 UTC" firstStartedPulling="2026-04-24 23:42:41.219515712 +0000 UTC m=+8.002587305" lastFinishedPulling="2026-04-24 23:42:45.952814225 +0000 UTC m=+12.735885836" observedRunningTime="2026-04-24 23:42:50.459860253 +0000 UTC m=+17.242931860" watchObservedRunningTime="2026-04-24 23:42:50.460451822 +0000 UTC m=+17.243523430" Apr 24 23:42:51.458664 kubelet[2664]: E0424 23:42:51.458572 2664 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:51.474925 systemd-networkd[1249]: cilium_host: Link UP Apr 24 23:42:51.481137 systemd-networkd[1249]: cilium_net: Link UP Apr 24 23:42:51.481141 systemd-networkd[1249]: cilium_net: Gained carrier Apr 24 23:42:51.481284 systemd-networkd[1249]: cilium_host: Gained carrier Apr 24 23:42:51.481404 systemd-networkd[1249]: cilium_host: Gained IPv6LL Apr 24 23:42:51.562032 systemd-networkd[1249]: cilium_vxlan: Link UP Apr 24 23:42:51.562044 systemd-networkd[1249]: cilium_vxlan: Gained carrier Apr 24 23:42:51.758308 kernel: NET: Registered PF_ALG protocol family Apr 24 23:42:51.865618 update_engine[1552]: I20260424 23:42:51.865440 1552 update_attempter.cc:509] Updating boot flags... Apr 24 23:42:51.886349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3515) Apr 24 23:42:51.920489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3515) Apr 24 23:42:52.280614 systemd-networkd[1249]: lxc_health: Link UP Apr 24 23:42:52.286402 systemd-networkd[1249]: lxc_health: Gained carrier Apr 24 23:42:52.427496 systemd-networkd[1249]: cilium_net: Gained IPv6LL Apr 24 23:42:52.458564 kubelet[2664]: E0424 23:42:52.458338 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:52.621096 systemd-networkd[1249]: lxc78272598ddb2: Link UP Apr 24 23:42:52.630466 kernel: eth0: renamed from tmp66f96 Apr 24 23:42:52.634458 systemd-networkd[1249]: lxcec5bf8850d33: Link UP Apr 24 23:42:52.635568 systemd-networkd[1249]: lxc78272598ddb2: Gained carrier Apr 24 23:42:52.646061 kernel: eth0: renamed from tmpc0678 Apr 24 23:42:52.652330 systemd-networkd[1249]: lxcec5bf8850d33: Gained carrier Apr 24 23:42:53.387488 systemd-networkd[1249]: 
cilium_vxlan: Gained IPv6LL Apr 24 23:42:53.452391 systemd-networkd[1249]: lxc_health: Gained IPv6LL Apr 24 23:42:53.459163 kubelet[2664]: E0424 23:42:53.459120 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:53.771407 systemd-networkd[1249]: lxcec5bf8850d33: Gained IPv6LL Apr 24 23:42:54.412482 systemd-networkd[1249]: lxc78272598ddb2: Gained IPv6LL Apr 24 23:42:55.630930 containerd[1572]: time="2026-04-24T23:42:55.630803750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:55.630930 containerd[1572]: time="2026-04-24T23:42:55.630858333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:55.630930 containerd[1572]: time="2026-04-24T23:42:55.630877594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:55.631446 containerd[1572]: time="2026-04-24T23:42:55.630976023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:55.642957 containerd[1572]: time="2026-04-24T23:42:55.642811749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:42:55.642957 containerd[1572]: time="2026-04-24T23:42:55.642867563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:42:55.643211 containerd[1572]: time="2026-04-24T23:42:55.643083463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:55.643474 containerd[1572]: time="2026-04-24T23:42:55.643429404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:42:55.655486 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:42:55.666308 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:42:55.684145 containerd[1572]: time="2026-04-24T23:42:55.684085651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69jtt,Uid:389bdae9-cc94-4a0f-8677-3648983c2a5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"66f96c747a9c7fa57f7f45a091d3dea5e00362a06ec5b5306ea6f890ea30cb5e\"" Apr 24 23:42:55.685525 kubelet[2664]: E0424 23:42:55.685503 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:55.694369 containerd[1572]: time="2026-04-24T23:42:55.693862131Z" level=info msg="CreateContainer within sandbox \"66f96c747a9c7fa57f7f45a091d3dea5e00362a06ec5b5306ea6f890ea30cb5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:42:55.695535 containerd[1572]: time="2026-04-24T23:42:55.695237817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-grn5x,Uid:7badf70f-3c71-4c4e-973b-cd4d731e54c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0678b457fe8387dfc00157319727f1cf1af5b0764307987a17496ed8386e28a\"" Apr 24 23:42:55.696093 kubelet[2664]: E0424 23:42:55.696064 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:55.702759 containerd[1572]: 
time="2026-04-24T23:42:55.702348848Z" level=info msg="CreateContainer within sandbox \"c0678b457fe8387dfc00157319727f1cf1af5b0764307987a17496ed8386e28a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:42:55.717850 containerd[1572]: time="2026-04-24T23:42:55.717736119Z" level=info msg="CreateContainer within sandbox \"66f96c747a9c7fa57f7f45a091d3dea5e00362a06ec5b5306ea6f890ea30cb5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"274d13c6c7fa09eadc510b36ce86a8a2b4634e380c4ee883982bf6dcfbacb632\"" Apr 24 23:42:55.720385 containerd[1572]: time="2026-04-24T23:42:55.719543312Z" level=info msg="StartContainer for \"274d13c6c7fa09eadc510b36ce86a8a2b4634e380c4ee883982bf6dcfbacb632\"" Apr 24 23:42:55.720385 containerd[1572]: time="2026-04-24T23:42:55.719917098Z" level=info msg="CreateContainer within sandbox \"c0678b457fe8387dfc00157319727f1cf1af5b0764307987a17496ed8386e28a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff441060ab1b7236cca9cdb8157cc3f472bd49c0282ea25ef59dca33fb21aec5\"" Apr 24 23:42:55.720588 containerd[1572]: time="2026-04-24T23:42:55.720477511Z" level=info msg="StartContainer for \"ff441060ab1b7236cca9cdb8157cc3f472bd49c0282ea25ef59dca33fb21aec5\"" Apr 24 23:42:55.776688 containerd[1572]: time="2026-04-24T23:42:55.776631845Z" level=info msg="StartContainer for \"ff441060ab1b7236cca9cdb8157cc3f472bd49c0282ea25ef59dca33fb21aec5\" returns successfully" Apr 24 23:42:55.776854 containerd[1572]: time="2026-04-24T23:42:55.776720885Z" level=info msg="StartContainer for \"274d13c6c7fa09eadc510b36ce86a8a2b4634e380c4ee883982bf6dcfbacb632\" returns successfully" Apr 24 23:42:56.469716 kubelet[2664]: E0424 23:42:56.469657 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:56.472011 kubelet[2664]: E0424 23:42:56.471986 2664 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:56.490912 kubelet[2664]: I0424 23:42:56.490687 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-69jtt" podStartSLOduration=16.49066765 podStartE2EDuration="16.49066765s" podCreationTimestamp="2026-04-24 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:56.49004159 +0000 UTC m=+23.273113181" watchObservedRunningTime="2026-04-24 23:42:56.49066765 +0000 UTC m=+23.273739259" Apr 24 23:42:56.535729 kubelet[2664]: I0424 23:42:56.535595 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-grn5x" podStartSLOduration=16.535536836 podStartE2EDuration="16.535536836s" podCreationTimestamp="2026-04-24 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:42:56.514832831 +0000 UTC m=+23.297904436" watchObservedRunningTime="2026-04-24 23:42:56.535536836 +0000 UTC m=+23.318608427" Apr 24 23:42:57.476663 kubelet[2664]: E0424 23:42:57.476583 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:57.476663 kubelet[2664]: E0424 23:42:57.476583 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:58.451531 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:37138.service - OpenSSH per-connection server daemon (10.0.0.1:37138). 
Apr 24 23:42:58.478188 kubelet[2664]: E0424 23:42:58.478165 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:58.478525 kubelet[2664]: E0424 23:42:58.478299 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:42:58.479400 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 37138 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:42:58.480594 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:42:58.485551 systemd-logind[1549]: New session 8 of user core. Apr 24 23:42:58.490496 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:42:58.760436 sshd[4070]: pam_unix(sshd:session): session closed for user core Apr 24 23:42:58.763422 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:37138.service: Deactivated successfully. Apr 24 23:42:58.766795 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:42:58.766967 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:42:58.768404 systemd-logind[1549]: Removed session 8. Apr 24 23:43:03.777482 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:37150.service - OpenSSH per-connection server daemon (10.0.0.1:37150). Apr 24 23:43:03.802286 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 37150 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:03.803865 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:03.808447 systemd-logind[1549]: New session 9 of user core. Apr 24 23:43:03.814475 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 24 23:43:03.986530 sshd[4086]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:03.989422 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:37150.service: Deactivated successfully. Apr 24 23:43:03.991471 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:43:03.991477 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:43:03.992399 systemd-logind[1549]: Removed session 9. Apr 24 23:43:08.999944 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:33322.service - OpenSSH per-connection server daemon (10.0.0.1:33322). Apr 24 23:43:09.033432 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 33322 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:09.035129 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:09.039606 systemd-logind[1549]: New session 10 of user core. Apr 24 23:43:09.048492 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:43:09.179434 sshd[4102]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:09.182219 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:33322.service: Deactivated successfully. Apr 24 23:43:09.183947 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:43:09.183996 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:43:09.185158 systemd-logind[1549]: Removed session 10. 
Apr 24 23:43:09.638030 kubelet[2664]: I0424 23:43:09.637874 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:43:09.638903 kubelet[2664]: E0424 23:43:09.638855 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:43:09.735507 kubelet[2664]: E0424 23:43:09.735440 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:43:14.192467 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:33334.service - OpenSSH per-connection server daemon (10.0.0.1:33334). Apr 24 23:43:14.219178 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 33334 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:14.220617 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:14.225198 systemd-logind[1549]: New session 11 of user core. Apr 24 23:43:14.237960 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:43:14.353012 sshd[4120]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:14.364511 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:33342.service - OpenSSH per-connection server daemon (10.0.0.1:33342). Apr 24 23:43:14.365025 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:33334.service: Deactivated successfully. Apr 24 23:43:14.368121 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:43:14.370003 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:43:14.371009 systemd-logind[1549]: Removed session 11. 
Apr 24 23:43:14.390791 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 33342 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:14.391825 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:14.395314 systemd-logind[1549]: New session 12 of user core. Apr 24 23:43:14.402441 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:43:14.592239 sshd[4134]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:14.601933 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:33346.service - OpenSSH per-connection server daemon (10.0.0.1:33346). Apr 24 23:43:14.602229 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:33342.service: Deactivated successfully. Apr 24 23:43:14.607007 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:43:14.608882 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:43:14.609829 systemd-logind[1549]: Removed session 12. Apr 24 23:43:14.638073 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 33346 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:14.639787 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:14.644613 systemd-logind[1549]: New session 13 of user core. Apr 24 23:43:14.654528 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:43:14.769071 sshd[4147]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:14.772094 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:33346.service: Deactivated successfully. Apr 24 23:43:14.774094 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Apr 24 23:43:14.774164 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:43:14.775116 systemd-logind[1549]: Removed session 13. 
Apr 24 23:43:19.785488 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:40792.service - OpenSSH per-connection server daemon (10.0.0.1:40792). Apr 24 23:43:19.810004 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 40792 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:19.811565 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:19.815642 systemd-logind[1549]: New session 14 of user core. Apr 24 23:43:19.827081 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:43:19.941887 sshd[4166]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:19.944657 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:40792.service: Deactivated successfully. Apr 24 23:43:19.946514 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:43:19.946580 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:43:19.947417 systemd-logind[1549]: Removed session 14. Apr 24 23:43:24.957466 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). Apr 24 23:43:24.986411 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:24.987473 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:24.991654 systemd-logind[1549]: New session 15 of user core. Apr 24 23:43:25.001484 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:43:25.122211 sshd[4181]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:25.138491 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:40816.service - OpenSSH per-connection server daemon (10.0.0.1:40816). Apr 24 23:43:25.138831 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:40806.service: Deactivated successfully. Apr 24 23:43:25.140396 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 24 23:43:25.141675 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:43:25.142792 systemd-logind[1549]: Removed session 15. Apr 24 23:43:25.166413 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 40816 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:25.167606 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:25.170863 systemd-logind[1549]: New session 16 of user core. Apr 24 23:43:25.178444 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:43:25.355469 sshd[4194]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:25.364583 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:40820.service - OpenSSH per-connection server daemon (10.0.0.1:40820). Apr 24 23:43:25.364983 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:40816.service: Deactivated successfully. Apr 24 23:43:25.367706 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:43:25.368831 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:43:25.371079 systemd-logind[1549]: Removed session 16. Apr 24 23:43:25.392328 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 40820 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:25.393282 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:25.396445 systemd-logind[1549]: New session 17 of user core. Apr 24 23:43:25.405005 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:43:25.999505 sshd[4207]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:26.009565 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:40200.service - OpenSSH per-connection server daemon (10.0.0.1:40200). Apr 24 23:43:26.011541 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:40820.service: Deactivated successfully. 
Apr 24 23:43:26.015173 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:43:26.021414 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:43:26.024101 systemd-logind[1549]: Removed session 17. Apr 24 23:43:26.047932 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 40200 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:26.049662 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:26.054861 systemd-logind[1549]: New session 18 of user core. Apr 24 23:43:26.065525 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:43:26.353242 sshd[4229]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:26.367848 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:40206.service - OpenSSH per-connection server daemon (10.0.0.1:40206). Apr 24 23:43:26.368816 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:40200.service: Deactivated successfully. Apr 24 23:43:26.377007 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:43:26.385429 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:43:26.386809 systemd-logind[1549]: Removed session 18. Apr 24 23:43:26.463536 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 40206 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:26.465559 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:26.470550 systemd-logind[1549]: New session 19 of user core. Apr 24 23:43:26.478861 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:43:26.599528 sshd[4241]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:26.601936 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:40206.service: Deactivated successfully. Apr 24 23:43:26.604714 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. 
Apr 24 23:43:26.604785 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:43:26.605697 systemd-logind[1549]: Removed session 19. Apr 24 23:43:31.610566 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:40210.service - OpenSSH per-connection server daemon (10.0.0.1:40210). Apr 24 23:43:31.635375 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 40210 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:31.636911 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:31.641366 systemd-logind[1549]: New session 20 of user core. Apr 24 23:43:31.651492 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:43:31.760270 sshd[4262]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:31.763437 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:40210.service: Deactivated successfully. Apr 24 23:43:31.765473 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:43:31.765533 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:43:31.766368 systemd-logind[1549]: Removed session 20. Apr 24 23:43:36.787915 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Apr 24 23:43:36.816044 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:36.817669 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:36.822156 systemd-logind[1549]: New session 21 of user core. Apr 24 23:43:36.837878 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:43:36.947914 sshd[4279]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:36.950970 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:56680.service: Deactivated successfully. Apr 24 23:43:36.953171 systemd-logind[1549]: Session 21 logged out. 
Waiting for processes to exit. Apr 24 23:43:36.953231 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:43:36.954338 systemd-logind[1549]: Removed session 21. Apr 24 23:43:41.965120 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:56682.service - OpenSSH per-connection server daemon (10.0.0.1:56682). Apr 24 23:43:41.991233 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 56682 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:41.992317 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:41.995980 systemd-logind[1549]: New session 22 of user core. Apr 24 23:43:42.007447 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 23:43:42.106210 sshd[4296]: pam_unix(sshd:session): session closed for user core Apr 24 23:43:42.115463 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:56692.service - OpenSSH per-connection server daemon (10.0.0.1:56692). Apr 24 23:43:42.115757 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:56682.service: Deactivated successfully. Apr 24 23:43:42.117042 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 23:43:42.117690 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Apr 24 23:43:42.118626 systemd-logind[1549]: Removed session 22. Apr 24 23:43:42.139488 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 56692 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:43:42.140415 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:43:42.143396 systemd-logind[1549]: New session 23 of user core. Apr 24 23:43:42.154446 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 24 23:43:43.490003 containerd[1572]: time="2026-04-24T23:43:43.489816328Z" level=info msg="StopContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" with timeout 30 (s)" Apr 24 23:43:43.491212 containerd[1572]: time="2026-04-24T23:43:43.491184666Z" level=info msg="Stop container \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" with signal terminated" Apr 24 23:43:43.531707 containerd[1572]: time="2026-04-24T23:43:43.531626521Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:43:43.536994 containerd[1572]: time="2026-04-24T23:43:43.536966166Z" level=info msg="StopContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" with timeout 2 (s)" Apr 24 23:43:43.537308 containerd[1572]: time="2026-04-24T23:43:43.537200978Z" level=info msg="Stop container \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" with signal terminated" Apr 24 23:43:43.539318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed-rootfs.mount: Deactivated successfully. 
Apr 24 23:43:43.543990 systemd-networkd[1249]: lxc_health: Link DOWN Apr 24 23:43:43.543999 systemd-networkd[1249]: lxc_health: Lost carrier Apr 24 23:43:43.545466 containerd[1572]: time="2026-04-24T23:43:43.545146065Z" level=info msg="shim disconnected" id=da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed namespace=k8s.io Apr 24 23:43:43.545466 containerd[1572]: time="2026-04-24T23:43:43.545316803Z" level=warning msg="cleaning up after shim disconnected" id=da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed namespace=k8s.io Apr 24 23:43:43.545466 containerd[1572]: time="2026-04-24T23:43:43.545324674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:43:43.569095 containerd[1572]: time="2026-04-24T23:43:43.569018179Z" level=info msg="StopContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" returns successfully" Apr 24 23:43:43.574990 containerd[1572]: time="2026-04-24T23:43:43.574868784Z" level=info msg="StopPodSandbox for \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\"" Apr 24 23:43:43.574990 containerd[1572]: time="2026-04-24T23:43:43.574998103Z" level=info msg="Container to stop \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.579522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97-shm.mount: Deactivated successfully. Apr 24 23:43:43.596022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67-rootfs.mount: Deactivated successfully. 
Apr 24 23:43:43.603343 containerd[1572]: time="2026-04-24T23:43:43.601215496Z" level=info msg="shim disconnected" id=0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67 namespace=k8s.io Apr 24 23:43:43.603343 containerd[1572]: time="2026-04-24T23:43:43.601404678Z" level=warning msg="cleaning up after shim disconnected" id=0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67 namespace=k8s.io Apr 24 23:43:43.603343 containerd[1572]: time="2026-04-24T23:43:43.601418830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:43:43.606187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97-rootfs.mount: Deactivated successfully. Apr 24 23:43:43.607634 containerd[1572]: time="2026-04-24T23:43:43.607024976Z" level=info msg="shim disconnected" id=a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97 namespace=k8s.io Apr 24 23:43:43.607634 containerd[1572]: time="2026-04-24T23:43:43.607173530Z" level=warning msg="cleaning up after shim disconnected" id=a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97 namespace=k8s.io Apr 24 23:43:43.607634 containerd[1572]: time="2026-04-24T23:43:43.607185079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:43:43.622011 containerd[1572]: time="2026-04-24T23:43:43.620410322Z" level=info msg="TearDown network for sandbox \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\" successfully" Apr 24 23:43:43.622011 containerd[1572]: time="2026-04-24T23:43:43.620437409Z" level=info msg="StopPodSandbox for \"a7f99c8eb652081179d96f5b7c940050941b0014cc29e30380a775842464eb97\" returns successfully" Apr 24 23:43:43.630486 containerd[1572]: time="2026-04-24T23:43:43.630449237Z" level=info msg="StopContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" returns successfully" Apr 24 23:43:43.630870 containerd[1572]: time="2026-04-24T23:43:43.630852338Z" 
level=info msg="StopPodSandbox for \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\"" Apr 24 23:43:43.630984 containerd[1572]: time="2026-04-24T23:43:43.630955975Z" level=info msg="Container to stop \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.630984 containerd[1572]: time="2026-04-24T23:43:43.630974078Z" level=info msg="Container to stop \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.630984 containerd[1572]: time="2026-04-24T23:43:43.630981990Z" level=info msg="Container to stop \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.631118 containerd[1572]: time="2026-04-24T23:43:43.631010987Z" level=info msg="Container to stop \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.631118 containerd[1572]: time="2026-04-24T23:43:43.631019068Z" level=info msg="Container to stop \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:43:43.662157 containerd[1572]: time="2026-04-24T23:43:43.662047059Z" level=info msg="shim disconnected" id=84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2 namespace=k8s.io Apr 24 23:43:43.662457 containerd[1572]: time="2026-04-24T23:43:43.662397867Z" level=warning msg="cleaning up after shim disconnected" id=84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2 namespace=k8s.io Apr 24 23:43:43.662457 containerd[1572]: time="2026-04-24T23:43:43.662441341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:43:43.678141 containerd[1572]: 
time="2026-04-24T23:43:43.678041899Z" level=info msg="TearDown network for sandbox \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" successfully" Apr 24 23:43:43.678141 containerd[1572]: time="2026-04-24T23:43:43.678113451Z" level=info msg="StopPodSandbox for \"84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2\" returns successfully" Apr 24 23:43:43.698310 kubelet[2664]: I0424 23:43:43.695629 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b13bdef4-3c13-4253-b936-2909f7d4c686-cilium-config-path\") pod \"b13bdef4-3c13-4253-b936-2909f7d4c686\" (UID: \"b13bdef4-3c13-4253-b936-2909f7d4c686\") " Apr 24 23:43:43.698310 kubelet[2664]: I0424 23:43:43.696332 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jdh8\" (UniqueName: \"kubernetes.io/projected/b13bdef4-3c13-4253-b936-2909f7d4c686-kube-api-access-2jdh8\") pod \"b13bdef4-3c13-4253-b936-2909f7d4c686\" (UID: \"b13bdef4-3c13-4253-b936-2909f7d4c686\") " Apr 24 23:43:43.698310 kubelet[2664]: I0424 23:43:43.697978 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b13bdef4-3c13-4253-b936-2909f7d4c686-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b13bdef4-3c13-4253-b936-2909f7d4c686" (UID: "b13bdef4-3c13-4253-b936-2909f7d4c686"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:43:43.709299 kubelet[2664]: I0424 23:43:43.706595 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13bdef4-3c13-4253-b936-2909f7d4c686-kube-api-access-2jdh8" (OuterVolumeSpecName: "kube-api-access-2jdh8") pod "b13bdef4-3c13-4253-b936-2909f7d4c686" (UID: "b13bdef4-3c13-4253-b936-2909f7d4c686"). InnerVolumeSpecName "kube-api-access-2jdh8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.797716 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-bpf-maps\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.797960 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hubble-tls\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.797973 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hostproc\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.798050 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-etc-cni-netd\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.798027 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.798152 kubelet[2664]: I0424 23:43:43.798071 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-lib-modules\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798500 kubelet[2664]: I0424 23:43:43.798090 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cni-path\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.798736 kubelet[2664]: I0424 23:43:43.798723 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-run\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798792 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-config-path\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798814 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89aadd07-4bff-4d36-ad4c-ff232e640d5d-clustermesh-secrets\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798839 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-cgroup\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798879 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn4q9\" (UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-kube-api-access-cn4q9\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798892 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-kernel\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799417 kubelet[2664]: I0424 23:43:43.798912 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-xtables-lock\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799557 kubelet[2664]: I0424 23:43:43.798926 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-net\") pod \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\" (UID: \"89aadd07-4bff-4d36-ad4c-ff232e640d5d\") " Apr 24 23:43:43.799557 kubelet[2664]: I0424 23:43:43.798992 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b13bdef4-3c13-4253-b936-2909f7d4c686-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.799557 kubelet[2664]: I0424 23:43:43.799000 2664 reconciler_common.go:299] "Volume detached for 
volume \"kube-api-access-2jdh8\" (UniqueName: \"kubernetes.io/projected/b13bdef4-3c13-4253-b936-2909f7d4c686-kube-api-access-2jdh8\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.799557 kubelet[2664]: I0424 23:43:43.799012 2664 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.800354 kubelet[2664]: I0424 23:43:43.798162 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.800354 kubelet[2664]: I0424 23:43:43.798174 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.800413 kubelet[2664]: I0424 23:43:43.798194 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hostproc" (OuterVolumeSpecName: "hostproc") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.800413 kubelet[2664]: I0424 23:43:43.798216 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cni-path" (OuterVolumeSpecName: "cni-path") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.800413 kubelet[2664]: I0424 23:43:43.799040 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.800413 kubelet[2664]: I0424 23:43:43.800380 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.801572 kubelet[2664]: I0424 23:43:43.801496 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:43:43.801572 kubelet[2664]: I0424 23:43:43.801524 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.801572 kubelet[2664]: I0424 23:43:43.801537 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.801572 kubelet[2664]: I0424 23:43:43.801547 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:43:43.801766 kubelet[2664]: I0424 23:43:43.801754 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89aadd07-4bff-4d36-ad4c-ff232e640d5d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:43:43.801905 kubelet[2664]: I0424 23:43:43.801859 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:43:43.802391 kubelet[2664]: I0424 23:43:43.802362 2664 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-kube-api-access-cn4q9" (OuterVolumeSpecName: "kube-api-access-cn4q9") pod "89aadd07-4bff-4d36-ad4c-ff232e640d5d" (UID: "89aadd07-4bff-4d36-ad4c-ff232e640d5d"). InnerVolumeSpecName "kube-api-access-cn4q9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:43:43.894541 kubelet[2664]: I0424 23:43:43.894451 2664 scope.go:117] "RemoveContainer" containerID="da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed" Apr 24 23:43:43.896756 containerd[1572]: time="2026-04-24T23:43:43.896085492Z" level=info msg="RemoveContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899654 2664 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89aadd07-4bff-4d36-ad4c-ff232e640d5d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899684 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899691 2664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cn4q9\" 
(UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-kube-api-access-cn4q9\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899702 2664 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899708 2664 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.899693 kubelet[2664]: I0424 23:43:43.899715 2664 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899721 2664 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899729 2664 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899739 2664 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899745 2664 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-lib-modules\") on node 
\"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899751 2664 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899756 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.900380 kubelet[2664]: I0424 23:43:43.899767 2664 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89aadd07-4bff-4d36-ad4c-ff232e640d5d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 24 23:43:43.904062 containerd[1572]: time="2026-04-24T23:43:43.903995815Z" level=info msg="RemoveContainer for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" returns successfully" Apr 24 23:43:43.905343 kubelet[2664]: I0424 23:43:43.905139 2664 scope.go:117] "RemoveContainer" containerID="da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed" Apr 24 23:43:43.906201 containerd[1572]: time="2026-04-24T23:43:43.905933600Z" level=error msg="ContainerStatus for \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\": not found" Apr 24 23:43:43.914590 kubelet[2664]: E0424 23:43:43.914415 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\": not found" containerID="da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed" Apr 24 23:43:43.914981 kubelet[2664]: I0424 23:43:43.914596 
2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed"} err="failed to get container status \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"da0de728830068c8b04d1035a4baf6d67cb56a081af7940688a8aa00fbcc86ed\": not found" Apr 24 23:43:43.914981 kubelet[2664]: I0424 23:43:43.914782 2664 scope.go:117] "RemoveContainer" containerID="0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67" Apr 24 23:43:43.916717 containerd[1572]: time="2026-04-24T23:43:43.916658266Z" level=info msg="RemoveContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\"" Apr 24 23:43:43.919565 containerd[1572]: time="2026-04-24T23:43:43.919508218Z" level=info msg="RemoveContainer for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" returns successfully" Apr 24 23:43:43.919688 kubelet[2664]: I0424 23:43:43.919665 2664 scope.go:117] "RemoveContainer" containerID="2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001" Apr 24 23:43:43.921432 containerd[1572]: time="2026-04-24T23:43:43.921377475Z" level=info msg="RemoveContainer for \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\"" Apr 24 23:43:43.939915 containerd[1572]: time="2026-04-24T23:43:43.938710629Z" level=info msg="RemoveContainer for \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\" returns successfully" Apr 24 23:43:43.985563 kubelet[2664]: I0424 23:43:43.985171 2664 scope.go:117] "RemoveContainer" containerID="500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9" Apr 24 23:43:43.989414 containerd[1572]: time="2026-04-24T23:43:43.989335949Z" level=info msg="RemoveContainer for \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\"" Apr 24 23:43:43.995823 containerd[1572]: 
time="2026-04-24T23:43:43.995784619Z" level=info msg="RemoveContainer for \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\" returns successfully" Apr 24 23:43:43.999059 kubelet[2664]: I0424 23:43:43.999039 2664 scope.go:117] "RemoveContainer" containerID="6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14" Apr 24 23:43:44.001144 containerd[1572]: time="2026-04-24T23:43:44.000544673Z" level=info msg="RemoveContainer for \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\"" Apr 24 23:43:44.004324 containerd[1572]: time="2026-04-24T23:43:44.004295988Z" level=info msg="RemoveContainer for \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\" returns successfully" Apr 24 23:43:44.004725 kubelet[2664]: I0424 23:43:44.004640 2664 scope.go:117] "RemoveContainer" containerID="84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf" Apr 24 23:43:44.005842 containerd[1572]: time="2026-04-24T23:43:44.005797046Z" level=info msg="RemoveContainer for \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\"" Apr 24 23:43:44.008081 containerd[1572]: time="2026-04-24T23:43:44.007875871Z" level=info msg="RemoveContainer for \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\" returns successfully" Apr 24 23:43:44.008132 kubelet[2664]: I0424 23:43:44.007984 2664 scope.go:117] "RemoveContainer" containerID="0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67" Apr 24 23:43:44.008210 containerd[1572]: time="2026-04-24T23:43:44.008165074Z" level=error msg="ContainerStatus for \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\": not found" Apr 24 23:43:44.008659 kubelet[2664]: E0424 23:43:44.008419 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\": not found" containerID="0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67" Apr 24 23:43:44.008955 kubelet[2664]: I0424 23:43:44.008734 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67"} err="failed to get container status \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a7489d032db6f35fd0213d5af7b3c870fb55be343a55cf34397b196a2351f67\": not found" Apr 24 23:43:44.008955 kubelet[2664]: I0424 23:43:44.008853 2664 scope.go:117] "RemoveContainer" containerID="2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001" Apr 24 23:43:44.009736 containerd[1572]: time="2026-04-24T23:43:44.009637999Z" level=error msg="ContainerStatus for \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\": not found" Apr 24 23:43:44.009998 kubelet[2664]: E0424 23:43:44.009927 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\": not found" containerID="2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001" Apr 24 23:43:44.010041 kubelet[2664]: I0424 23:43:44.010011 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001"} err="failed to get container status \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"2b2997e60f7e3b5248a0c515c1ae0994266ee48c9e41213692c768c71a971001\": not found" Apr 24 23:43:44.010041 kubelet[2664]: I0424 23:43:44.010030 2664 scope.go:117] "RemoveContainer" containerID="500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9" Apr 24 23:43:44.010685 containerd[1572]: time="2026-04-24T23:43:44.010233766Z" level=error msg="ContainerStatus for \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\": not found" Apr 24 23:43:44.010764 kubelet[2664]: E0424 23:43:44.010518 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\": not found" containerID="500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9" Apr 24 23:43:44.010764 kubelet[2664]: I0424 23:43:44.010580 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9"} err="failed to get container status \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"500ac02ed50da59341e22055bdb0c326fad3de7cd6af0ef5ea4f823d1da3d6d9\": not found" Apr 24 23:43:44.010764 kubelet[2664]: I0424 23:43:44.010602 2664 scope.go:117] "RemoveContainer" containerID="6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14" Apr 24 23:43:44.010930 containerd[1572]: time="2026-04-24T23:43:44.010818497Z" level=error msg="ContainerStatus for \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\": not found" Apr 24 23:43:44.011086 kubelet[2664]: E0424 23:43:44.011032 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\": not found" containerID="6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14" Apr 24 23:43:44.011112 kubelet[2664]: I0424 23:43:44.011059 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14"} err="failed to get container status \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d884c48a56919e662b2dc25ccbc54e85ae05f8e86a4b1d0cc27bc14d89a9b14\": not found" Apr 24 23:43:44.011129 kubelet[2664]: I0424 23:43:44.011115 2664 scope.go:117] "RemoveContainer" containerID="84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf" Apr 24 23:43:44.014228 containerd[1572]: time="2026-04-24T23:43:44.012094503Z" level=error msg="ContainerStatus for \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\": not found" Apr 24 23:43:44.014568 kubelet[2664]: E0424 23:43:44.013106 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\": not found" containerID="84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf" Apr 24 23:43:44.014568 kubelet[2664]: I0424 23:43:44.013279 2664 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf"} err="failed to get container status \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"84b4f372b94eee007b88d31a427d42ac247bc76135edee2c3d6574c813777bbf\": not found" Apr 24 23:43:44.518342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2-rootfs.mount: Deactivated successfully. Apr 24 23:43:44.518528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84fe2d554e5d49c91c97d678ae7a40022bc8a003e0fd63d37b431f3ef9be33d2-shm.mount: Deactivated successfully. Apr 24 23:43:44.518603 systemd[1]: var-lib-kubelet-pods-89aadd07\x2d4bff\x2d4d36\x2dad4c\x2dff232e640d5d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 24 23:43:44.518676 systemd[1]: var-lib-kubelet-pods-89aadd07\x2d4bff\x2d4d36\x2dad4c\x2dff232e640d5d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 24 23:43:44.518738 systemd[1]: var-lib-kubelet-pods-b13bdef4\x2d3c13\x2d4253\x2db936\x2d2909f7d4c686-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jdh8.mount: Deactivated successfully. Apr 24 23:43:44.518804 systemd[1]: var-lib-kubelet-pods-89aadd07\x2d4bff\x2d4d36\x2dad4c\x2dff232e640d5d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcn4q9.mount: Deactivated successfully. 
Apr 24 23:43:45.305772 kubelet[2664]: E0424 23:43:45.305693 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:45.308099 kubelet[2664]: I0424 23:43:45.308069 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aadd07-4bff-4d36-ad4c-ff232e640d5d" path="/var/lib/kubelet/pods/89aadd07-4bff-4d36-ad4c-ff232e640d5d/volumes"
Apr 24 23:43:45.309570 kubelet[2664]: I0424 23:43:45.309293 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13bdef4-3c13-4253-b936-2909f7d4c686" path="/var/lib/kubelet/pods/b13bdef4-3c13-4253-b936-2909f7d4c686/volumes"
Apr 24 23:43:45.456735 sshd[4310]: pam_unix(sshd:session): session closed for user core
Apr 24 23:43:45.466478 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:56702.service - OpenSSH per-connection server daemon (10.0.0.1:56702).
Apr 24 23:43:45.466798 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:56692.service: Deactivated successfully.
Apr 24 23:43:45.468232 systemd[1]: session-23.scope: Deactivated successfully.
Apr 24 23:43:45.469403 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit.
Apr 24 23:43:45.470440 systemd-logind[1549]: Removed session 23.
Apr 24 23:43:45.492899 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 56702 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:43:45.494356 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:43:45.498758 systemd-logind[1549]: New session 24 of user core.
Apr 24 23:43:45.506491 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 24 23:43:46.207840 sshd[4478]: pam_unix(sshd:session): session closed for user core
Apr 24 23:43:46.218702 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874).
Apr 24 23:43:46.219139 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:56702.service: Deactivated successfully.
Apr 24 23:43:46.232224 systemd[1]: session-24.scope: Deactivated successfully.
Apr 24 23:43:46.238596 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit.
Apr 24 23:43:46.244159 systemd-logind[1549]: Removed session 24.
Apr 24 23:43:46.269415 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:43:46.271496 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:43:46.277425 systemd-logind[1549]: New session 25 of user core.
Apr 24 23:43:46.285074 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321808 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-cilium-run\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321852 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-hostproc\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321872 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-cilium-cgroup\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321890 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-lib-modules\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321912 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-xtables-lock\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322127 kubelet[2664]: I0424 23:43:46.321935 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b3779d6-ab51-409b-bedb-46777644f949-cilium-config-path\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.321974 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-host-proc-sys-net\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.321999 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-etc-cni-netd\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.322017 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b3779d6-ab51-409b-bedb-46777644f949-cilium-ipsec-secrets\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.322037 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b3779d6-ab51-409b-bedb-46777644f949-hubble-tls\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.322083 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-bpf-maps\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322551 kubelet[2664]: I0424 23:43:46.322124 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-cni-path\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322659 kubelet[2664]: I0424 23:43:46.322143 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b3779d6-ab51-409b-bedb-46777644f949-clustermesh-secrets\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322659 kubelet[2664]: I0424 23:43:46.322295 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt27\" (UniqueName: \"kubernetes.io/projected/8b3779d6-ab51-409b-bedb-46777644f949-kube-api-access-kwt27\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.322659 kubelet[2664]: I0424 23:43:46.322320 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b3779d6-ab51-409b-bedb-46777644f949-host-proc-sys-kernel\") pod \"cilium-cvz9h\" (UID: \"8b3779d6-ab51-409b-bedb-46777644f949\") " pod="kube-system/cilium-cvz9h"
Apr 24 23:43:46.337488 sshd[4491]: pam_unix(sshd:session): session closed for user core
Apr 24 23:43:46.345494 systemd[1]: Started sshd@25-10.0.0.69:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886).
Apr 24 23:43:46.345802 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:33874.service: Deactivated successfully.
Apr 24 23:43:46.347122 systemd[1]: session-25.scope: Deactivated successfully.
Apr 24 23:43:46.348175 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit.
Apr 24 23:43:46.349218 systemd-logind[1549]: Removed session 25.
Apr 24 23:43:46.369410 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:43:46.370410 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:43:46.373708 systemd-logind[1549]: New session 26 of user core.
Apr 24 23:43:46.388435 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 24 23:43:46.568425 kubelet[2664]: E0424 23:43:46.566718 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:46.575173 containerd[1572]: time="2026-04-24T23:43:46.572609847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvz9h,Uid:8b3779d6-ab51-409b-bedb-46777644f949,Namespace:kube-system,Attempt:0,}"
Apr 24 23:43:46.596066 containerd[1572]: time="2026-04-24T23:43:46.595905007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:43:46.596066 containerd[1572]: time="2026-04-24T23:43:46.595996174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:43:46.596066 containerd[1572]: time="2026-04-24T23:43:46.596008730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:43:46.596829 containerd[1572]: time="2026-04-24T23:43:46.596772127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:43:46.636724 containerd[1572]: time="2026-04-24T23:43:46.636632739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvz9h,Uid:8b3779d6-ab51-409b-bedb-46777644f949,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\""
Apr 24 23:43:46.641760 kubelet[2664]: E0424 23:43:46.640297 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:46.650568 containerd[1572]: time="2026-04-24T23:43:46.650338808Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 24 23:43:46.662853 containerd[1572]: time="2026-04-24T23:43:46.662795291Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af31a8100da5e1caf89c2b40cacf9a56bf3cd3b7e1867f23843a2cdbc0211cf2\""
Apr 24 23:43:46.663587 containerd[1572]: time="2026-04-24T23:43:46.663544896Z" level=info msg="StartContainer for \"af31a8100da5e1caf89c2b40cacf9a56bf3cd3b7e1867f23843a2cdbc0211cf2\""
Apr 24 23:43:46.721863 containerd[1572]: time="2026-04-24T23:43:46.721819852Z" level=info msg="StartContainer for \"af31a8100da5e1caf89c2b40cacf9a56bf3cd3b7e1867f23843a2cdbc0211cf2\" returns successfully"
Apr 24 23:43:46.755166 containerd[1572]: time="2026-04-24T23:43:46.755094964Z" level=info msg="shim disconnected" id=af31a8100da5e1caf89c2b40cacf9a56bf3cd3b7e1867f23843a2cdbc0211cf2 namespace=k8s.io
Apr 24 23:43:46.755166 containerd[1572]: time="2026-04-24T23:43:46.755186288Z" level=warning msg="cleaning up after shim disconnected" id=af31a8100da5e1caf89c2b40cacf9a56bf3cd3b7e1867f23843a2cdbc0211cf2 namespace=k8s.io
Apr 24 23:43:46.755567 containerd[1572]: time="2026-04-24T23:43:46.755193105Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:43:46.915983 kubelet[2664]: E0424 23:43:46.915674 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:46.930413 containerd[1572]: time="2026-04-24T23:43:46.930350901Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 24 23:43:46.941646 containerd[1572]: time="2026-04-24T23:43:46.941440502Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2903bd444e44ecf8a5c81581217c6fa2a09c9d1c6ed683c8a5250a95d4a1caf5\""
Apr 24 23:43:46.943122 containerd[1572]: time="2026-04-24T23:43:46.943094843Z" level=info msg="StartContainer for \"2903bd444e44ecf8a5c81581217c6fa2a09c9d1c6ed683c8a5250a95d4a1caf5\""
Apr 24 23:43:46.989493 containerd[1572]: time="2026-04-24T23:43:46.989450423Z" level=info msg="StartContainer for \"2903bd444e44ecf8a5c81581217c6fa2a09c9d1c6ed683c8a5250a95d4a1caf5\" returns successfully"
Apr 24 23:43:47.024088 containerd[1572]: time="2026-04-24T23:43:47.023960960Z" level=info msg="shim disconnected" id=2903bd444e44ecf8a5c81581217c6fa2a09c9d1c6ed683c8a5250a95d4a1caf5 namespace=k8s.io
Apr 24 23:43:47.024088 containerd[1572]: time="2026-04-24T23:43:47.024029367Z" level=warning msg="cleaning up after shim disconnected" id=2903bd444e44ecf8a5c81581217c6fa2a09c9d1c6ed683c8a5250a95d4a1caf5 namespace=k8s.io
Apr 24 23:43:47.024088 containerd[1572]: time="2026-04-24T23:43:47.024036524Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:43:47.922404 kubelet[2664]: E0424 23:43:47.922358 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:47.930893 containerd[1572]: time="2026-04-24T23:43:47.930827648Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 24 23:43:47.950528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160469236.mount: Deactivated successfully.
Apr 24 23:43:47.953094 containerd[1572]: time="2026-04-24T23:43:47.953030081Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012\""
Apr 24 23:43:47.955326 containerd[1572]: time="2026-04-24T23:43:47.955135474Z" level=info msg="StartContainer for \"d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012\""
Apr 24 23:43:48.023493 containerd[1572]: time="2026-04-24T23:43:48.022912048Z" level=info msg="StartContainer for \"d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012\" returns successfully"
Apr 24 23:43:48.055230 containerd[1572]: time="2026-04-24T23:43:48.055085846Z" level=info msg="shim disconnected" id=d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012 namespace=k8s.io
Apr 24 23:43:48.055230 containerd[1572]: time="2026-04-24T23:43:48.055170728Z" level=warning msg="cleaning up after shim disconnected" id=d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012 namespace=k8s.io
Apr 24 23:43:48.055230 containerd[1572]: time="2026-04-24T23:43:48.055189178Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:43:48.367104 kubelet[2664]: E0424 23:43:48.367034 2664 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 24 23:43:48.428488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d3901e952c106d3a0d3b51b647613e2dcfc64fa9f67a109e716173e68f9012-rootfs.mount: Deactivated successfully.
Apr 24 23:43:48.929750 kubelet[2664]: E0424 23:43:48.929617 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:48.937656 containerd[1572]: time="2026-04-24T23:43:48.937608100Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 24 23:43:48.963395 containerd[1572]: time="2026-04-24T23:43:48.963200747Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3\""
Apr 24 23:43:48.987776 containerd[1572]: time="2026-04-24T23:43:48.987733306Z" level=info msg="StartContainer for \"e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3\""
Apr 24 23:43:49.027187 containerd[1572]: time="2026-04-24T23:43:49.027153318Z" level=info msg="StartContainer for \"e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3\" returns successfully"
Apr 24 23:43:49.047155 containerd[1572]: time="2026-04-24T23:43:49.047041190Z" level=info msg="shim disconnected" id=e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3 namespace=k8s.io
Apr 24 23:43:49.047155 containerd[1572]: time="2026-04-24T23:43:49.047133071Z" level=warning msg="cleaning up after shim disconnected" id=e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3 namespace=k8s.io
Apr 24 23:43:49.047155 containerd[1572]: time="2026-04-24T23:43:49.047141072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:43:49.428767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3703a9bc22c32edb880292a173b8dbabbf4771df56a2fa7b264b2ff2fb76ff3-rootfs.mount: Deactivated successfully.
Apr 24 23:43:49.940662 kubelet[2664]: E0424 23:43:49.940466 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:50.006948 containerd[1572]: time="2026-04-24T23:43:50.006858486Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 24 23:43:50.019935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308087554.mount: Deactivated successfully.
Apr 24 23:43:50.022790 containerd[1572]: time="2026-04-24T23:43:50.022706307Z" level=info msg="CreateContainer within sandbox \"6e8e3bd49a283ba6e01e9e58869a7a37edea1c94b2796279121820e6cd08164f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2faf5af779e24b7ce4a8ba36374796aa36c888a152dbb4aba43b8e5340885016\""
Apr 24 23:43:50.024356 containerd[1572]: time="2026-04-24T23:43:50.024337331Z" level=info msg="StartContainer for \"2faf5af779e24b7ce4a8ba36374796aa36c888a152dbb4aba43b8e5340885016\""
Apr 24 23:43:50.081238 containerd[1572]: time="2026-04-24T23:43:50.081105697Z" level=info msg="StartContainer for \"2faf5af779e24b7ce4a8ba36374796aa36c888a152dbb4aba43b8e5340885016\" returns successfully"
Apr 24 23:43:50.320281 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 24 23:43:50.952658 kubelet[2664]: E0424 23:43:50.952452 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:50.996665 kubelet[2664]: I0424 23:43:50.996412 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cvz9h" podStartSLOduration=4.9963879460000005 podStartE2EDuration="4.996387946s" podCreationTimestamp="2026-04-24 23:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:43:50.995620005 +0000 UTC m=+77.778691618" watchObservedRunningTime="2026-04-24 23:43:50.996387946 +0000 UTC m=+77.779459560"
Apr 24 23:43:52.305029 kubelet[2664]: E0424 23:43:52.304909 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:52.570425 kubelet[2664]: E0424 23:43:52.570150 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:53.108095 systemd-networkd[1249]: lxc_health: Link UP
Apr 24 23:43:53.117756 systemd-networkd[1249]: lxc_health: Gained carrier
Apr 24 23:43:54.187588 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Apr 24 23:43:54.571437 kubelet[2664]: E0424 23:43:54.571287 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:54.973790 kubelet[2664]: E0424 23:43:54.973684 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:56.009304 kubelet[2664]: E0424 23:43:56.007394 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:43:57.005872 systemd[1]: run-containerd-runc-k8s.io-2faf5af779e24b7ce4a8ba36374796aa36c888a152dbb4aba43b8e5340885016-runc.rCTsGc.mount: Deactivated successfully.
Apr 24 23:44:01.304665 kubelet[2664]: E0424 23:44:01.304625 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:44:01.338163 sshd[4500]: pam_unix(sshd:session): session closed for user core
Apr 24 23:44:01.341208 systemd[1]: sshd@25-10.0.0.69:22-10.0.0.1:33886.service: Deactivated successfully.
Apr 24 23:44:01.343648 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit.
Apr 24 23:44:01.343719 systemd[1]: session-26.scope: Deactivated successfully.
Apr 24 23:44:01.344796 systemd-logind[1549]: Removed session 26.