Apr 17 23:48:08.948789 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:48:08.948838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:48:08.948852 kernel: BIOS-provided physical RAM map:
Apr 17 23:48:08.948859 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:48:08.948866 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 23:48:08.948874 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 23:48:08.948883 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 23:48:08.948890 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 23:48:08.948898 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 17 23:48:08.948906 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 17 23:48:08.948915 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 17 23:48:08.948923 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 17 23:48:08.948931 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 17 23:48:08.948938 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 17 23:48:08.948948 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 17 23:48:08.948957 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 23:48:08.948966 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 17 23:48:08.948975 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 17 23:48:08.948983 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 23:48:08.948991 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:48:08.949000 kernel: NX (Execute Disable) protection: active
Apr 17 23:48:08.949007 kernel: APIC: Static calls initialized
Apr 17 23:48:08.949015 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:48:08.949056 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 17 23:48:08.949065 kernel: SMBIOS 2.8 present.
Apr 17 23:48:08.949074 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 17 23:48:08.949081 kernel: Hypervisor detected: KVM
Apr 17 23:48:08.949092 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:48:08.949101 kernel: kvm-clock: using sched offset of 5443719009 cycles
Apr 17 23:48:08.949109 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:48:08.949118 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:48:08.949127 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:48:08.949135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:48:08.949144 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 17 23:48:08.949152 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:48:08.949160 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:48:08.949171 kernel: Using GB pages for direct mapping
Apr 17 23:48:08.949179 kernel: Secure boot disabled
Apr 17 23:48:08.949188 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:48:08.949197 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 23:48:08.949209 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:48:08.949218 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949226 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949237 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 23:48:08.949246 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949254 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949263 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949273 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:48:08.949281 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:48:08.949290 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 23:48:08.949301 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 23:48:08.949310 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 23:48:08.949319 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 23:48:08.949328 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 23:48:08.949337 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 23:48:08.949346 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 23:48:08.949354 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 23:48:08.949363 kernel: No NUMA configuration found
Apr 17 23:48:08.949372 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 17 23:48:08.949383 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 17 23:48:08.949392 kernel: Zone ranges:
Apr 17 23:48:08.949400 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:48:08.949409 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 17 23:48:08.949419 kernel: Normal empty
Apr 17 23:48:08.949427 kernel: Movable zone start for each node
Apr 17 23:48:08.949436 kernel: Early memory node ranges
Apr 17 23:48:08.949445 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:48:08.949453 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 23:48:08.949463 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 23:48:08.949473 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 17 23:48:08.949482 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 17 23:48:08.949491 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 17 23:48:08.949499 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 17 23:48:08.949508 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:48:08.949517 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:48:08.949525 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 23:48:08.949534 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:48:08.949543 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 17 23:48:08.949553 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:48:08.949562 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 17 23:48:08.949570 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:48:08.949579 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:48:08.949587 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:48:08.949595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:48:08.949604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:48:08.949613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:48:08.949622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:48:08.949630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:48:08.949641 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:48:08.949649 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:48:08.949657 kernel: TSC deadline timer available
Apr 17 23:48:08.949666 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:48:08.949675 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:48:08.949683 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:48:08.949691 kernel: kvm-guest: setup PV sched yield
Apr 17 23:48:08.949699 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:48:08.949707 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:48:08.949718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:48:08.949727 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:48:08.949735 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:48:08.949744 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:48:08.949753 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:48:08.949761 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:48:08.949770 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:48:08.949779 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:48:08.949789 kernel: random: crng init done
Apr 17 23:48:08.949798 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:48:08.949827 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:48:08.949837 kernel: Fallback order for Node 0: 0
Apr 17 23:48:08.949845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 17 23:48:08.949854 kernel: Policy zone: DMA32
Apr 17 23:48:08.949863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:48:08.949872 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 167136K reserved, 0K cma-reserved)
Apr 17 23:48:08.949881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:48:08.949891 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:48:08.949900 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:48:08.949909 kernel: Dynamic Preempt: voluntary
Apr 17 23:48:08.949918 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:48:08.949935 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:48:08.949942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:48:08.949948 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:48:08.949953 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:48:08.949959 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:48:08.949964 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:48:08.949969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:48:08.949975 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:48:08.949982 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:48:08.949987 kernel: Console: colour dummy device 80x25
Apr 17 23:48:08.949993 kernel: printk: console [ttyS0] enabled
Apr 17 23:48:08.949998 kernel: ACPI: Core revision 20230628
Apr 17 23:48:08.950004 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:48:08.950011 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:48:08.950016 kernel: x2apic enabled
Apr 17 23:48:08.950064 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:48:08.950074 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:48:08.950085 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:48:08.950095 kernel: kvm-guest: setup PV IPIs
Apr 17 23:48:08.950101 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:48:08.950107 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:48:08.950112 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:48:08.950120 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:48:08.950125 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:48:08.950131 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:48:08.950137 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:48:08.950142 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:48:08.950148 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:48:08.950153 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:48:08.950159 kernel: RETBleed: Vulnerable
Apr 17 23:48:08.950164 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:48:08.950171 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:48:08.950177 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:48:08.950182 kernel: active return thunk: its_return_thunk
Apr 17 23:48:08.950187 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:48:08.950193 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:48:08.950199 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:48:08.950204 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:48:08.950210 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:48:08.950215 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:48:08.950222 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:48:08.950227 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:48:08.950233 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:48:08.950238 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:48:08.950244 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:48:08.950249 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:48:08.950255 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:48:08.950260 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:48:08.950266 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:48:08.950272 kernel: landlock: Up and running.
Apr 17 23:48:08.950278 kernel: SELinux: Initializing.
Apr 17 23:48:08.950283 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:48:08.950289 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:48:08.950294 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:48:08.950300 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:48:08.950306 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:48:08.950312 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:48:08.950318 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:48:08.950324 kernel: signal: max sigframe size: 3632
Apr 17 23:48:08.950329 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:48:08.950335 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:48:08.950341 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:48:08.950346 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:48:08.950351 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:48:08.950357 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:48:08.950362 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:48:08.950370 kernel: smpboot: Max logical packages: 1
Apr 17 23:48:08.950375 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:48:08.950381 kernel: devtmpfs: initialized
Apr 17 23:48:08.950386 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:48:08.950392 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 23:48:08.950397 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 23:48:08.950403 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 17 23:48:08.950408 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 23:48:08.950414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 23:48:08.950421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:48:08.950426 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:48:08.950432 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:48:08.950437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:48:08.950443 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:48:08.950448 kernel: audit: type=2000 audit(1776469688.247:1): state=initialized audit_enabled=0 res=1
Apr 17 23:48:08.950453 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:48:08.950459 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:48:08.950464 kernel: cpuidle: using governor menu
Apr 17 23:48:08.950471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:48:08.950477 kernel: dca service started, version 1.12.1
Apr 17 23:48:08.950482 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:48:08.950488 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:48:08.950493 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:48:08.950499 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:48:08.950504 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:48:08.950510 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:48:08.950516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:48:08.950522 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:48:08.950528 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:48:08.950533 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:48:08.950539 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:48:08.950544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:48:08.950550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:48:08.950555 kernel: ACPI: Interpreter enabled
Apr 17 23:48:08.950561 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:48:08.950566 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:48:08.950573 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:48:08.950578 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:48:08.950584 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:48:08.950589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:48:08.950707 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:48:08.950770 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:48:08.950854 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:48:08.950864 kernel: PCI host bridge to bus 0000:00
Apr 17 23:48:08.950922 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:48:08.950971 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:48:08.951103 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:48:08.951179 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:48:08.951227 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:48:08.951275 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 17 23:48:08.951328 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:48:08.951393 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:48:08.951454 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:48:08.951509 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 17 23:48:08.951563 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 17 23:48:08.951617 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:48:08.951670 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 17 23:48:08.951727 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:48:08.951788 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:48:08.951868 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 17 23:48:08.951922 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 17 23:48:08.951977 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 17 23:48:08.952070 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:48:08.952131 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 17 23:48:08.952186 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 17 23:48:08.952240 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 17 23:48:08.952306 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:48:08.952390 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 17 23:48:08.952473 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 17 23:48:08.952556 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 17 23:48:08.952637 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 17 23:48:08.952714 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:48:08.952782 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:48:08.952885 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:48:08.952954 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 17 23:48:08.953058 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 17 23:48:08.953136 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:48:08.953207 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 17 23:48:08.953217 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:48:08.953225 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:48:08.953233 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:48:08.953241 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:48:08.953249 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:48:08.953257 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:48:08.953265 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:48:08.953274 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:48:08.953282 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:48:08.953290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:48:08.953298 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:48:08.953305 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:48:08.953313 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:48:08.953321 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:48:08.953329 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:48:08.953336 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:48:08.953346 kernel: iommu: Default domain type: Translated
Apr 17 23:48:08.953354 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:48:08.953362 kernel: efivars: Registered efivars operations
Apr 17 23:48:08.953369 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:48:08.953377 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:48:08.953385 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 23:48:08.953392 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 17 23:48:08.953400 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 17 23:48:08.953408 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 17 23:48:08.953476 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:48:08.953543 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:48:08.953611 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:48:08.953621 kernel: vgaarb: loaded
Apr 17 23:48:08.953629 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:48:08.953637 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:48:08.953645 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:48:08.953652 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:48:08.953660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:48:08.953669 kernel: pnp: PnP ACPI init
Apr 17 23:48:08.953741 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:48:08.953751 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:48:08.953759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:48:08.953767 kernel: NET: Registered PF_INET protocol family
Apr 17 23:48:08.953775 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:48:08.953783 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:48:08.953791 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:48:08.953801 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:48:08.953833 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:48:08.953841 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:48:08.953850 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:48:08.953858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:48:08.953866 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:48:08.953873 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:48:08.953946 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 17 23:48:08.954014 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 17 23:48:08.954146 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:48:08.954223 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:48:08.954295 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:48:08.954366 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:48:08.954437 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:48:08.954514 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 17 23:48:08.954526 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:48:08.954537 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:48:08.954572 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:48:08.954582 kernel: Initialise system trusted keyrings
Apr 17 23:48:08.954592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:48:08.954602 kernel: Key type asymmetric registered
Apr 17 23:48:08.954611 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:48:08.954621 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:48:08.954631 kernel: io scheduler mq-deadline registered
Apr 17 23:48:08.954640 kernel: io scheduler kyber registered
Apr 17 23:48:08.954650 kernel: io scheduler bfq registered
Apr 17 23:48:08.954662 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:48:08.954673 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:48:08.954682 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:48:08.954692 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:48:08.954702 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:48:08.954711 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:48:08.954721 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:48:08.954730 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:48:08.954740 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:48:08.954861 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:48:08.954877 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:48:08.954957 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:48:08.955078 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:48:08 UTC (1776469688)
Apr 17 23:48:08.955164 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 23:48:08.955176 kernel: intel_pstate: CPU model not supported
Apr 17 23:48:08.955186 kernel: efifb: probing for efifb
Apr 17 23:48:08.955198 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 17 23:48:08.955207 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 17 23:48:08.955216 kernel: efifb: scrolling: redraw
Apr 17 23:48:08.955224 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 17 23:48:08.955233 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:48:08.955243 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:48:08.955269 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:48:08.955281 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:48:08.955292 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:48:08.955303 kernel: Segment Routing with IPv6
Apr 17 23:48:08.955314 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:48:08.955323 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:48:08.955334 kernel: Key type dns_resolver registered
Apr 17 23:48:08.955343 kernel: IPI shorthand broadcast: enabled
Apr 17 23:48:08.955353 kernel: sched_clock: Marking stable (944026445, 239825961)->(1271922547, -88070141)
Apr 17 23:48:08.955363 kernel: registered taskstats version 1
Apr 17 23:48:08.955373 kernel: Loading compiled-in X.509 certificates
Apr 17 23:48:08.955384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:48:08.955393 kernel: Key type .fscrypt registered
Apr 17 23:48:08.955405 kernel: Key type fscrypt-provisioning registered
Apr 17 23:48:08.955414 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:48:08.955424 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:48:08.955433 kernel: ima: No architecture policies found
Apr 17 23:48:08.955443 kernel: clk: Disabling unused clocks
Apr 17 23:48:08.955453 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:48:08.955463 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:48:08.955472 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:48:08.955482 kernel: Run /init as init process
Apr 17 23:48:08.955494 kernel: with arguments:
Apr 17 23:48:08.955504 kernel: /init
Apr 17 23:48:08.955513 kernel: with environment:
Apr 17 23:48:08.955523 kernel: HOME=/
Apr 17 23:48:08.955532 kernel: TERM=linux
Apr 17 23:48:08.955545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:48:08.955557 systemd[1]: Detected virtualization kvm.
Apr 17 23:48:08.955571 systemd[1]: Detected architecture x86-64.
Apr 17 23:48:08.955581 systemd[1]: Running in initrd.
Apr 17 23:48:08.955592 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:48:08.955603 systemd[1]: Hostname set to .
Apr 17 23:48:08.955613 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:48:08.955626 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:48:08.955636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:48:08.955647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:48:08.955660 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:48:08.955671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:48:08.955681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:48:08.955692 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:48:08.955707 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:48:08.955718 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:48:08.955729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:48:08.955740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:48:08.955750 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:48:08.955761 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:48:08.955772 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:48:08.955782 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:48:08.955793 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:48:08.955855 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:48:08.955870 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:48:08.955880 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:48:08.955891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:48:08.955902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:48:08.955913 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:48:08.955923 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:48:08.955933 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:48:08.955949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:48:08.955960 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:48:08.955971 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:48:08.955981 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:48:08.955992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:48:08.956002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:48:08.956013 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:48:08.956082 systemd-journald[194]: Collecting audit messages is disabled. Apr 17 23:48:08.956111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:48:08.956122 systemd-journald[194]: Journal started Apr 17 23:48:08.956149 systemd-journald[194]: Runtime Journal (/run/log/journal/f897e586bf2f4a598f13f0ccf01fc887) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:48:08.961064 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:48:08.962494 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:48:08.963083 systemd-modules-load[195]: Inserted module 'overlay' Apr 17 23:48:08.975233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:48:08.977948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:48:08.986335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:48:08.993273 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:48:08.996206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 17 23:48:08.997197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:48:09.001141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:48:09.011508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:48:09.029346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:48:09.035300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:48:09.031373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:48:09.040848 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 17 23:48:09.041566 kernel: Bridge firewalling registered Apr 17 23:48:09.041499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:48:09.045461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:48:09.057648 dracut-cmdline[224]: dracut-dracut-053 Apr 17 23:48:09.057672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:48:09.064267 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:48:09.080272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:48:09.104337 systemd-resolved[238]: Positive Trust Anchors: Apr 17 23:48:09.104350 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:48:09.104375 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:48:09.106350 systemd-resolved[238]: Defaulting to hostname 'linux'. Apr 17 23:48:09.107146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:48:09.112736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:48:09.189129 kernel: SCSI subsystem initialized Apr 17 23:48:09.199101 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:48:09.210089 kernel: iscsi: registered transport (tcp) Apr 17 23:48:09.230106 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:48:09.230175 kernel: QLogic iSCSI HBA Driver Apr 17 23:48:09.262557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:48:09.281213 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:48:09.306666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 17 23:48:09.306735 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:48:09.308401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:48:09.346117 kernel: raid6: avx512x4 gen() 42461 MB/s Apr 17 23:48:09.363081 kernel: raid6: avx512x2 gen() 41378 MB/s Apr 17 23:48:09.381188 kernel: raid6: avx512x1 gen() 39747 MB/s Apr 17 23:48:09.398106 kernel: raid6: avx2x4 gen() 29250 MB/s Apr 17 23:48:09.415131 kernel: raid6: avx2x2 gen() 34790 MB/s Apr 17 23:48:09.433116 kernel: raid6: avx2x1 gen() 27191 MB/s Apr 17 23:48:09.433182 kernel: raid6: using algorithm avx512x4 gen() 42461 MB/s Apr 17 23:48:09.451084 kernel: raid6: .... xor() 9612 MB/s, rmw enabled Apr 17 23:48:09.451160 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:48:09.472080 kernel: xor: automatically using best checksumming function avx Apr 17 23:48:09.608103 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:48:09.618159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:48:09.630270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:48:09.639670 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 17 23:48:09.642661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:48:09.653187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:48:09.667296 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Apr 17 23:48:09.696910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:48:09.709266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:48:09.742348 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:48:09.751380 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 17 23:48:09.762717 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:48:09.768272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:48:09.773913 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:48:09.776584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:48:09.789081 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:48:09.792196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:48:09.798345 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:48:09.805139 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:48:09.805206 kernel: GPT:9289727 != 19775487 Apr 17 23:48:09.805248 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:48:09.805275 kernel: GPT:9289727 != 19775487 Apr 17 23:48:09.805292 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:48:09.805309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:48:09.804851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:48:09.804933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:48:09.816582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:48:09.817472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:48:09.817696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:48:09.821653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:48:09.835071 kernel: libata version 3.00 loaded. Apr 17 23:48:09.837711 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:48:09.840350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:48:09.851628 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:48:09.851653 kernel: AES CTR mode by8 optimization enabled Apr 17 23:48:09.844484 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:48:09.855865 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:48:09.857932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:48:09.863512 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:48:09.863639 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:48:09.870784 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:48:09.870995 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:48:09.875125 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (475) Apr 17 23:48:09.877370 kernel: scsi host0: ahci Apr 17 23:48:09.875099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:48:09.881835 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465) Apr 17 23:48:09.885176 kernel: scsi host1: ahci Apr 17 23:48:09.886730 kernel: scsi host2: ahci Apr 17 23:48:09.887727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 23:48:09.894550 kernel: scsi host3: ahci Apr 17 23:48:09.894729 kernel: scsi host4: ahci Apr 17 23:48:09.893397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:48:09.904057 kernel: scsi host5: ahci Apr 17 23:48:09.904210 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 17 23:48:09.904227 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 17 23:48:09.905415 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 17 23:48:09.914911 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 17 23:48:09.914928 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 17 23:48:09.914936 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 17 23:48:09.914943 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 17 23:48:09.913976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:48:09.915961 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:48:09.925083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:48:09.945304 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:48:09.947406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:48:09.962919 disk-uuid[577]: Primary Header is updated. Apr 17 23:48:09.962919 disk-uuid[577]: Secondary Entries is updated. Apr 17 23:48:09.962919 disk-uuid[577]: Secondary Header is updated. Apr 17 23:48:09.969076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:48:09.969298 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:48:10.222123 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:48:10.229067 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:48:10.231106 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:48:10.234081 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:48:10.234102 kernel: ata3.00: applying bridge limits Apr 17 23:48:10.235088 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:48:10.236088 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:48:10.239088 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:48:10.239106 kernel: ata3.00: configured for UDMA/100 Apr 17 23:48:10.243106 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:48:10.289268 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:48:10.289571 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:48:10.303112 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:48:10.985073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:48:10.985213 disk-uuid[582]: The operation has completed successfully. Apr 17 23:48:11.008632 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:48:11.008751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:48:11.036338 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:48:11.043782 sh[604]: Success Apr 17 23:48:11.060066 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:48:11.092875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:48:11.114882 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:48:11.118513 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:48:11.131064 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:48:11.131099 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:48:11.131109 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:48:11.132810 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:48:11.135247 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:48:11.140640 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:48:11.143775 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:48:11.158308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:48:11.161904 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:48:11.172279 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:48:11.172317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:48:11.172335 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:48:11.176041 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:48:11.184255 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:48:11.187583 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:48:11.195913 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:48:11.202293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:48:11.249472 ignition[696]: Ignition 2.19.0 Apr 17 23:48:11.249492 ignition[696]: Stage: fetch-offline Apr 17 23:48:11.249517 ignition[696]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:11.249523 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:11.249707 ignition[696]: parsed url from cmdline: "" Apr 17 23:48:11.249710 ignition[696]: no config URL provided Apr 17 23:48:11.249714 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:48:11.249724 ignition[696]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:48:11.249746 ignition[696]: op(1): [started] loading QEMU firmware config module Apr 17 23:48:11.249750 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:48:11.256561 ignition[696]: op(1): [finished] loading QEMU firmware config module Apr 17 23:48:11.272062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:48:11.291332 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:48:11.307681 systemd-networkd[792]: lo: Link UP Apr 17 23:48:11.307688 systemd-networkd[792]: lo: Gained carrier Apr 17 23:48:11.308613 systemd-networkd[792]: Enumeration completed Apr 17 23:48:11.309146 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:48:11.309148 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:48:11.311095 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:48:11.311882 systemd-networkd[792]: eth0: Link UP Apr 17 23:48:11.311884 systemd-networkd[792]: eth0: Gained carrier Apr 17 23:48:11.311891 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:48:11.314460 systemd[1]: Reached target network.target - Network. Apr 17 23:48:11.340239 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:48:11.405415 ignition[696]: parsing config with SHA512: db1ca9feefbde99db3003fd4d63c71adf0f3e068debd126c7533f45400356e83c6d9519cb7dcf197c6e2c3cbf6ea3f2d3708a9ec8cb3c5a5ff412c0e055be5c8 Apr 17 23:48:11.412969 unknown[696]: fetched base config from "system" Apr 17 23:48:11.412989 unknown[696]: fetched user config from "qemu" Apr 17 23:48:11.413382 ignition[696]: fetch-offline: fetch-offline passed Apr 17 23:48:11.414430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:48:11.413434 ignition[696]: Ignition finished successfully Apr 17 23:48:11.414932 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:48:11.432315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:48:11.444396 ignition[797]: Ignition 2.19.0 Apr 17 23:48:11.444413 ignition[797]: Stage: kargs Apr 17 23:48:11.444565 ignition[797]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:11.444572 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:11.445292 ignition[797]: kargs: kargs passed Apr 17 23:48:11.449360 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:48:11.445325 ignition[797]: Ignition finished successfully Apr 17 23:48:11.470315 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 17 23:48:11.483758 ignition[805]: Ignition 2.19.0 Apr 17 23:48:11.483771 ignition[805]: Stage: disks Apr 17 23:48:11.484095 ignition[805]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:11.484107 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:11.487628 ignition[805]: disks: disks passed Apr 17 23:48:11.487683 ignition[805]: Ignition finished successfully Apr 17 23:48:11.493207 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:48:11.495462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:48:11.498639 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:48:11.499635 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:48:11.508115 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:48:11.511574 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:48:11.529380 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:48:11.540929 systemd-fsck[815]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:48:11.545306 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:48:11.556200 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:48:11.641966 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:48:11.644676 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:48:11.644096 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:48:11.660281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:48:11.662413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:48:11.666253 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 23:48:11.666339 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:48:11.666365 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:48:11.678285 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:48:11.684213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:48:11.694967 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (823) Apr 17 23:48:11.699115 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:48:11.699176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:48:11.702159 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:48:11.707065 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:48:11.709686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:48:11.728771 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:48:11.735605 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:48:11.742789 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:48:11.749925 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:48:11.831464 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:48:11.847190 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:48:11.848740 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:48:11.862094 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:48:11.874793 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:48:11.882870 ignition[936]: INFO : Ignition 2.19.0 Apr 17 23:48:11.882870 ignition[936]: INFO : Stage: mount Apr 17 23:48:11.885862 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:11.885862 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:11.885862 ignition[936]: INFO : mount: mount passed Apr 17 23:48:11.885862 ignition[936]: INFO : Ignition finished successfully Apr 17 23:48:11.888341 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:48:11.899253 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:48:12.129366 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:48:12.146278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:48:12.155088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (950) Apr 17 23:48:12.158737 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:48:12.158756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:48:12.158765 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:48:12.165070 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:48:12.166416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:48:12.197943 ignition[967]: INFO : Ignition 2.19.0 Apr 17 23:48:12.197943 ignition[967]: INFO : Stage: files Apr 17 23:48:12.201183 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:12.201183 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:12.201183 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:48:12.209187 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:48:12.209187 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:48:12.214739 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:48:12.217317 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:48:12.220131 unknown[967]: wrote ssh authorized keys file for user: core Apr 17 23:48:12.222315 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:48:12.222315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:48:12.222315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:48:12.222315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:48:12.222315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:48:12.295538 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 23:48:12.400646 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:48:12.400646 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:48:12.408101 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 17 23:48:12.620881 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 17 23:48:12.724257 systemd-networkd[792]: eth0: Gained IPv6LL Apr 17 23:48:12.775720 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:48:12.775720 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:48:12.782188 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:48:12.840918 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 17 23:48:13.246853 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:48:13.246853 ignition[967]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Apr 17 23:48:13.253058 ignition[967]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:48:13.290679 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:48:13.290679 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:48:13.290679 ignition[967]: INFO : files: files passed Apr 17 23:48:13.290679 ignition[967]: INFO : Ignition finished successfully Apr 17 23:48:13.291460 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:48:13.302291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:48:13.304370 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:48:13.334354 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory Apr 17 23:48:13.314712 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:48:13.338214 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:48:13.338214 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:48:13.314852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:48:13.344908 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:48:13.320284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:48:13.323001 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:48:13.327091 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:48:13.375407 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:48:13.375545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:48:13.384308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Apr 17 23:48:13.385676 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:48:13.390710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:48:13.391502 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:48:13.407226 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:48:13.424244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:48:13.436226 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:48:13.437133 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:48:13.441510 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:48:13.445123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:48:13.445224 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:48:13.452490 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:48:13.457106 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:48:13.460244 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:48:13.465281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:48:13.467470 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:48:13.470108 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:48:13.473698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:48:13.477124 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:48:13.481115 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:48:13.484716 systemd[1]: Stopped target swap.target - Swaps. 
Apr 17 23:48:13.487965 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:48:13.488137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:48:13.491951 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:48:13.493941 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:48:13.497544 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:48:13.497775 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:48:13.500876 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:48:13.500981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:48:13.507576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:48:13.507696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:48:13.511269 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:48:13.514108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:48:13.515741 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:48:13.519300 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:48:13.522473 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:48:13.525225 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:48:13.525303 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:48:13.528362 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:48:13.528448 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:48:13.529491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:48:13.529588 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 17 23:48:13.534451 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:48:13.534570 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:48:13.560591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:48:13.561822 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:48:13.562058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:48:13.567513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:48:13.570891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:48:13.571645 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:48:13.575110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:48:13.575238 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:48:13.583646 ignition[1021]: INFO : Ignition 2.19.0 Apr 17 23:48:13.583646 ignition[1021]: INFO : Stage: umount Apr 17 23:48:13.583646 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:48:13.583646 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:48:13.583646 ignition[1021]: INFO : umount: umount passed Apr 17 23:48:13.583646 ignition[1021]: INFO : Ignition finished successfully Apr 17 23:48:13.583899 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:48:13.583965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:48:13.585483 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:48:13.585539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:48:13.590465 systemd[1]: Stopped target network.target - Network. Apr 17 23:48:13.592395 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:48:13.592439 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 17 23:48:13.596895 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:48:13.596940 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:48:13.597863 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:48:13.597901 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:48:13.601869 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:48:13.601906 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:48:13.619433 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:48:13.623067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:48:13.626801 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:48:13.628089 systemd-networkd[792]: eth0: DHCPv6 lease lost Apr 17 23:48:13.629637 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:48:13.629796 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:48:13.634290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:48:13.634375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:48:13.653264 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:48:13.655403 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:48:13.655467 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:48:13.656609 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:48:13.662943 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:48:13.663073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:48:13.667723 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:48:13.667820 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 17 23:48:13.673116 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:48:13.673166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:48:13.677726 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:48:13.677760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:48:13.681961 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:48:13.681995 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:48:13.685313 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:48:13.685342 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:48:13.689427 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:48:13.689510 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:48:13.717995 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:48:13.718203 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:48:13.721212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:48:13.721247 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:48:13.724957 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:48:13.724984 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:48:13.728565 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:48:13.728600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:48:13.734232 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:48:13.734272 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:48:13.737563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 17 23:48:13.737590 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:48:13.752219 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:48:13.754878 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:48:13.754926 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:48:13.757967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:48:13.758006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:48:13.762327 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:48:13.762413 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:48:13.763105 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:48:13.767809 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:48:13.785425 systemd[1]: Switching root. Apr 17 23:48:13.818528 systemd-journald[194]: Journal stopped Apr 17 23:48:14.662551 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 17 23:48:14.662606 kernel: SELinux: policy capability network_peer_controls=1 Apr 17 23:48:14.662619 kernel: SELinux: policy capability open_perms=1 Apr 17 23:48:14.662627 kernel: SELinux: policy capability extended_socket_class=1 Apr 17 23:48:14.662639 kernel: SELinux: policy capability always_check_network=0 Apr 17 23:48:14.662647 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 17 23:48:14.662654 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 17 23:48:14.662662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 17 23:48:14.662670 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 17 23:48:14.662680 kernel: audit: type=1403 audit(1776469693.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 17 23:48:14.662689 systemd[1]: Successfully loaded SELinux policy in 33.131ms. Apr 17 23:48:14.662707 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.181ms. Apr 17 23:48:14.662716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:48:14.662724 systemd[1]: Detected virtualization kvm. Apr 17 23:48:14.662733 systemd[1]: Detected architecture x86-64. Apr 17 23:48:14.662743 systemd[1]: Detected first boot. Apr 17 23:48:14.662750 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:48:14.662758 zram_generator::config[1083]: No configuration found. Apr 17 23:48:14.662767 systemd[1]: Populated /etc with preset unit settings. Apr 17 23:48:14.662777 systemd[1]: Queued start job for default target multi-user.target. Apr 17 23:48:14.662784 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 17 23:48:14.662793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 17 23:48:14.662800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 17 23:48:14.662808 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 17 23:48:14.662816 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 17 23:48:14.662825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 17 23:48:14.662833 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 17 23:48:14.662866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 17 23:48:14.662877 systemd[1]: Created slice user.slice - User and Session Slice. Apr 17 23:48:14.662884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:48:14.662892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:48:14.662900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 17 23:48:14.662908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 17 23:48:14.662916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 17 23:48:14.662924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:48:14.662932 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 17 23:48:14.662944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:48:14.662952 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:48:14.662960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 17 23:48:14.662968 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:48:14.662976 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:48:14.662984 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:48:14.662992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:48:14.662999 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:48:14.663008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:48:14.663016 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:48:14.663054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:48:14.663062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:48:14.663071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:48:14.663079 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:48:14.663086 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 17 23:48:14.663094 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:48:14.663102 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:48:14.663110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:48:14.663121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 17 23:48:14.663129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:48:14.663137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 17 23:48:14.663145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:48:14.663153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 17 23:48:14.663160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:48:14.663168 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:48:14.663176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:48:14.663185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:48:14.663193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:48:14.663201 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:48:14.663208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:48:14.663216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:48:14.663224 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 17 23:48:14.663232 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 17 23:48:14.663240 kernel: loop: module loaded Apr 17 23:48:14.663248 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:48:14.663255 kernel: fuse: init (API version 7.39) Apr 17 23:48:14.663263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:48:14.663270 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 23:48:14.663288 systemd-journald[1179]: Collecting audit messages is disabled. Apr 17 23:48:14.663305 systemd-journald[1179]: Journal started Apr 17 23:48:14.663321 systemd-journald[1179]: Runtime Journal (/run/log/journal/f897e586bf2f4a598f13f0ccf01fc887) is 6.0M, max 48.3M, 42.2M free. 
Apr 17 23:48:14.668103 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 17 23:48:14.673786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:48:14.678068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:48:14.680095 kernel: ACPI: bus type drm_connector registered Apr 17 23:48:14.680126 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:48:14.683343 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 17 23:48:14.685237 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 17 23:48:14.687263 systemd[1]: Mounted media.mount - External Media Directory. Apr 17 23:48:14.688996 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 17 23:48:14.690935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:48:14.692869 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:48:14.694730 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:48:14.696961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:48:14.699274 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:48:14.699404 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:48:14.701590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:48:14.701713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:48:14.703803 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:48:14.704058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:48:14.705999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 17 23:48:14.706142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:48:14.708356 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:48:14.708473 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:48:14.710449 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:48:14.710589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:48:14.712625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:48:14.714734 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:48:14.717122 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 17 23:48:14.719949 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:48:14.729139 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 23:48:14.746166 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:48:14.749327 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:48:14.751344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:48:14.752569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:48:14.756160 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:48:14.758187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:48:14.759087 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 17 23:48:14.761060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:48:14.761825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:48:14.765196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:48:14.770516 systemd-journald[1179]: Time spent on flushing to /var/log/journal/f897e586bf2f4a598f13f0ccf01fc887 is 13.949ms for 987 entries. Apr 17 23:48:14.770516 systemd-journald[1179]: System Journal (/var/log/journal/f897e586bf2f4a598f13f0ccf01fc887) is 8.0M, max 195.6M, 187.6M free. Apr 17 23:48:14.804907 systemd-journald[1179]: Received client request to flush runtime journal. Apr 17 23:48:14.772394 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:48:14.775488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:48:14.777626 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:48:14.780092 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:48:14.788448 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:48:14.792140 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:48:14.798694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:48:14.802297 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Apr 17 23:48:14.802313 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Apr 17 23:48:14.806083 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:48:14.809775 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 17 23:48:14.819256 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:48:14.842693 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:48:14.852461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:48:14.867291 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Apr 17 23:48:14.867318 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Apr 17 23:48:14.872154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:48:15.122086 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:48:15.141324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:48:15.159566 systemd-udevd[1248]: Using default interface naming scheme 'v255'. Apr 17 23:48:15.174779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:48:15.183174 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:48:15.189014 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:48:15.200699 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 17 23:48:15.210200 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1252) Apr 17 23:48:15.233578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:48:15.247900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 17 23:48:15.265074 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 17 23:48:15.274062 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:48:15.277937 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 17 23:48:15.278158 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:48:15.284123 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:48:15.284291 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:48:15.284390 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:48:15.291812 systemd-networkd[1258]: lo: Link UP
Apr 17 23:48:15.291834 systemd-networkd[1258]: lo: Gained carrier
Apr 17 23:48:15.293520 systemd-networkd[1258]: Enumeration completed
Apr 17 23:48:15.293691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:48:15.295628 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:48:15.295630 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:48:15.297130 systemd-networkd[1258]: eth0: Link UP
Apr 17 23:48:15.297133 systemd-networkd[1258]: eth0: Gained carrier
Apr 17 23:48:15.297145 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:48:15.309166 systemd-networkd[1258]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:48:15.309297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:48:15.313603 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:48:15.355446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:48:15.365329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:48:15.365745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:48:15.372344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:48:15.451622 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:48:15.454393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:48:15.467297 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:48:15.476630 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:48:15.514084 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:48:15.516702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:48:15.527550 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:48:15.531870 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:48:15.572563 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:48:15.575805 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:48:15.578290 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:48:15.578335 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:48:15.580429 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:48:15.583582 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:48:15.602642 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:48:15.606525 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:48:15.608431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:48:15.609270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:48:15.615198 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:48:15.621172 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:48:15.622649 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:48:15.628979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:48:15.638089 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 23:48:15.644672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:48:15.645330 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:48:15.657104 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:48:15.697089 kernel: loop1: detected capacity change from 0 to 142488
Apr 17 23:48:15.730068 kernel: loop2: detected capacity change from 0 to 140768
Apr 17 23:48:15.763106 kernel: loop3: detected capacity change from 0 to 228704
Apr 17 23:48:15.775075 kernel: loop4: detected capacity change from 0 to 142488
Apr 17 23:48:15.788096 kernel: loop5: detected capacity change from 0 to 140768
Apr 17 23:48:15.796216 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 23:48:15.796806 (sd-merge)[1321]: Merged extensions into '/usr'.
Apr 17 23:48:15.800276 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:48:15.800303 systemd[1]: Reloading...
Apr 17 23:48:15.829114 zram_generator::config[1345]: No configuration found.
Apr 17 23:48:15.849501 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:48:15.945393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:48:15.993632 systemd[1]: Reloading finished in 193 ms.
Apr 17 23:48:16.007148 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:48:16.009705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:48:16.032359 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:48:16.034988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:48:16.040299 systemd[1]: Reloading requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:48:16.040415 systemd[1]: Reloading...
Apr 17 23:48:16.050157 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:48:16.050372 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:48:16.050873 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:48:16.051080 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Apr 17 23:48:16.051129 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Apr 17 23:48:16.052881 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:48:16.052899 systemd-tmpfiles[1394]: Skipping /boot
Apr 17 23:48:16.058725 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:48:16.058761 systemd-tmpfiles[1394]: Skipping /boot
Apr 17 23:48:16.081075 zram_generator::config[1422]: No configuration found.
Apr 17 23:48:16.175327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:48:16.219243 systemd[1]: Reloading finished in 178 ms.
Apr 17 23:48:16.249790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:48:16.259952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:48:16.263397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:48:16.266478 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:48:16.272282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:48:16.278144 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:48:16.283838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:48:16.285312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:48:16.287313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:48:16.291748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:48:16.295235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:48:16.297375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:48:16.297476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:48:16.298313 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:48:16.301438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:48:16.301589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:48:16.304338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:48:16.304551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:48:16.307357 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:48:16.307484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:48:16.308301 augenrules[1495]: No rules
Apr 17 23:48:16.309978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:48:16.325403 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:48:16.331533 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:48:16.337342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:48:16.337502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:48:16.345254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:48:16.348122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:48:16.351153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:48:16.356295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:48:16.358318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:48:16.359667 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:48:16.361598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:48:16.361676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:48:16.362531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:48:16.362667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:48:16.365391 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:48:16.365497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:48:16.367788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:48:16.367946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:48:16.370604 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:48:16.370708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:48:16.373150 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:48:16.376917 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:48:16.380781 systemd-resolved[1473]: Positive Trust Anchors:
Apr 17 23:48:16.380817 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:48:16.380842 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:48:16.381999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:48:16.382086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:48:16.383987 systemd-resolved[1473]: Defaulting to hostname 'linux'.
Apr 17 23:48:16.387279 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:48:16.389301 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:48:16.391458 systemd[1]: Reached target network.target - Network.
Apr 17 23:48:16.393010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:48:16.428360 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:48:16.430976 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:48:17.277915 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 23:48:17.277944 systemd-timesyncd[1531]: Initial clock synchronization to Fri 2026-04-17 23:48:17.277834 UTC.
Apr 17 23:48:17.279229 systemd-resolved[1473]: Clock change detected. Flushing caches.
Apr 17 23:48:17.279819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:48:17.282124 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:48:17.284381 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:48:17.286453 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:48:17.286493 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:48:17.288082 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:48:17.289990 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:48:17.291964 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:48:17.294139 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:48:17.296193 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:48:17.299712 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:48:17.302324 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:48:17.311614 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:48:17.313600 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:48:17.315297 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:48:17.317021 systemd[1]: System is tainted: cgroupsv1
Apr 17 23:48:17.317074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:48:17.317094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:48:17.318133 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:48:17.321080 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:48:17.323864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:48:17.329827 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:48:17.331854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:48:17.333557 jq[1537]: false
Apr 17 23:48:17.336547 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:48:17.340794 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:48:17.344993 extend-filesystems[1539]: Found loop3
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found loop4
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found loop5
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found sr0
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda1
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda2
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda3
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found usr
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda4
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda6
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda7
Apr 17 23:48:17.347192 extend-filesystems[1539]: Found vda9
Apr 17 23:48:17.347192 extend-filesystems[1539]: Checking size of /dev/vda9
Apr 17 23:48:17.382060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1262)
Apr 17 23:48:17.350022 dbus-daemon[1536]: [system] SELinux support is enabled
Apr 17 23:48:17.357273 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:48:17.382470 extend-filesystems[1539]: Resized partition /dev/vda9
Apr 17 23:48:17.364013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:48:17.387580 extend-filesystems[1559]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:48:17.369770 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:48:17.378236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:48:17.384995 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:48:17.392780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:48:17.396218 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:48:17.399943 jq[1565]: true
Apr 17 23:48:17.400738 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 23:48:17.414025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:48:17.414273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:48:17.414661 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:48:17.414919 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:48:17.418306 update_engine[1563]: I20260417 23:48:17.416306 1563 main.cc:92] Flatcar Update Engine starting
Apr 17 23:48:17.418306 update_engine[1563]: I20260417 23:48:17.417409 1563 update_check_scheduler.cc:74] Next update check in 4m26s
Apr 17 23:48:17.419080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:48:17.419311 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:48:17.429082 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:48:17.429094 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:48:17.432070 jq[1570]: true
Apr 17 23:48:17.432400 systemd-logind[1558]: New seat seat0.
Apr 17 23:48:17.436929 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:48:17.438439 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:48:17.446719 tar[1568]: linux-amd64/LICENSE
Apr 17 23:48:17.446719 tar[1568]: linux-amd64/helm
Apr 17 23:48:17.446824 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:48:17.450039 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 23:48:17.448433 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:48:17.453552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:48:17.453933 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:48:17.457410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:48:17.457599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:48:17.461290 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:48:17.470887 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 23:48:17.470887 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 23:48:17.470887 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 23:48:17.480406 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Apr 17 23:48:17.474162 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:48:17.483286 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:48:17.483472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:48:17.488404 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:48:17.489598 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:48:17.494047 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:48:17.512003 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:48:17.608219 containerd[1571]: time="2026-04-17T23:48:17.607817888Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:48:17.630538 containerd[1571]: time="2026-04-17T23:48:17.630203680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632188 containerd[1571]: time="2026-04-17T23:48:17.631977295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632188 containerd[1571]: time="2026-04-17T23:48:17.632013469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 17 23:48:17.632188 containerd[1571]: time="2026-04-17T23:48:17.632032118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:48:17.632259 containerd[1571]: time="2026-04-17T23:48:17.632208367Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:48:17.632259 containerd[1571]: time="2026-04-17T23:48:17.632228481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632475 containerd[1571]: time="2026-04-17T23:48:17.632289301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632475 containerd[1571]: time="2026-04-17T23:48:17.632309026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632639 containerd[1571]: time="2026-04-17T23:48:17.632594117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632665 containerd[1571]: time="2026-04-17T23:48:17.632638362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632665 containerd[1571]: time="2026-04-17T23:48:17.632655890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632725 containerd[1571]: time="2026-04-17T23:48:17.632667365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:48:17.632850 containerd[1571]: time="2026-04-17T23:48:17.632790913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.633022 containerd[1571]: time="2026-04-17T23:48:17.632984287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:48:17.633277 containerd[1571]: time="2026-04-17T23:48:17.633167749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:48:17.633277 containerd[1571]: time="2026-04-17T23:48:17.633187517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:48:17.633308 containerd[1571]: time="2026-04-17T23:48:17.633274247Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:48:17.633360 containerd[1571]: time="2026-04-17T23:48:17.633318469Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:48:17.637653 containerd[1571]: time="2026-04-17T23:48:17.637559540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:48:17.637653 containerd[1571]: time="2026-04-17T23:48:17.637606276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:48:17.637653 containerd[1571]: time="2026-04-17T23:48:17.637625859Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:48:17.637653 containerd[1571]: time="2026-04-17T23:48:17.637643962Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 17 23:48:17.637773 containerd[1571]: time="2026-04-17T23:48:17.637733335Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:48:17.637980 containerd[1571]: time="2026-04-17T23:48:17.637849167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:48:17.638157 containerd[1571]: time="2026-04-17T23:48:17.638139990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638227538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638247683Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638263259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638279999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638296719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638311001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638329075Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638345215Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638359943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638374433Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638388122Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638412986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638431862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638657 containerd[1571]: time="2026-04-17T23:48:17.638446155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638470031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638483592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638535573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638547872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638563103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638577455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638593764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638608979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638623505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638638838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638655823Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638733220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638751686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.638864 containerd[1571]: time="2026-04-17T23:48:17.638765590Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638812763Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638831672Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638843971Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638858591Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638869714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638883908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638895332Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:48:17.639030 containerd[1571]: time="2026-04-17T23:48:17.638908648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
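The plugin-registration entries above all share containerd's logfmt-style shape (`msg="loading plugin \"<id>\"..." type=<type>`), with the OTLP and tracing plugins skipped because no tracing endpoint is configured. As a rough sketch (the sample lines below are shortened copies of entries from this log), the loaded-vs-skipped plugins can be extracted like this:

```python
import re

# Two entries in the shape containerd emits above, reproduced as raw strings.
LINES = [
    r'time="2026-04-17T23:48:17.638563103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1',
    r'time="2026-04-17T23:48:17.638858591Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1',
]

# Match the escaped-quoted plugin id and whether it was loaded or skipped.
PLUGIN_RE = re.compile(r'msg="(?P<action>skip loading|loading) plugin \\"(?P<plugin>[^\\]+)\\"')

def plugin_events(lines):
    """Yield (action, plugin-id) pairs from containerd plugin-load log lines."""
    for line in lines:
        m = PLUGIN_RE.search(line)
        if m:
            yield m.group("action"), m.group("plugin")

events = list(plugin_events(LINES))
```

This is only a log-scraping convenience; containerd's own `ctr plugins ls` reports the authoritative plugin status.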
type=io.containerd.grpc.v1 Apr 17 23:48:17.639334 containerd[1571]: time="2026-04-17T23:48:17.639217971Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:48:17.639334 containerd[1571]: time="2026-04-17T23:48:17.639290584Z" level=info msg="Connect containerd service" Apr 17 23:48:17.639465 containerd[1571]: time="2026-04-17T23:48:17.639336682Z" level=info msg="using legacy CRI server" Apr 17 23:48:17.639465 containerd[1571]: time="2026-04-17T23:48:17.639345131Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:48:17.639491 containerd[1571]: time="2026-04-17T23:48:17.639464637Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:48:17.641175 containerd[1571]: time="2026-04-17T23:48:17.640965640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641268467Z" level=info msg="Start subscribing containerd event" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641360124Z" level=info msg="Start recovering state" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641413347Z" level=info msg="Start event monitor" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641428536Z" 
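The `failed to load cni during init` error above is expected on first boot: the CRI plugin config points `NetworkPluginConfDir` at `/etc/cni/net.d`, which is still empty until a network add-on installs a config there. A minimal sketch of the kind of `.conflist` file containerd looks for (the file name, network name, bridge, and subnet below are illustrative, not taken from this system):

```python
import json, os, tempfile

# Illustrative CNI network config of the shape expected under /etc/cni/net.d.
conf = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",          # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],  # illustrative subnet
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

cni_dir = tempfile.mkdtemp()                 # stand-in for /etc/cni/net.d
path = os.path.join(cni_dir, "10-example.conflist")
with open(path, "w") as f:
    json.dump(conf, f, indent=2)
```

Once a file like this exists in the real conf dir, the CRI plugin's CNI conf syncer (started later in this log) picks it up without a containerd restart.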
level=info msg="Start snapshots syncer" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641437426Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641442853Z" level=info msg="Start streaming server" Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641745560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641829067Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:48:17.641945 containerd[1571]: time="2026-04-17T23:48:17.641919648Z" level=info msg="containerd successfully booted in 0.034831s" Apr 17 23:48:17.642012 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:48:17.666881 systemd-networkd[1258]: eth0: Gained IPv6LL Apr 17 23:48:17.669963 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:48:17.673261 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:48:17.688545 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 23:48:17.690401 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:48:17.692991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:17.697881 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:48:17.717912 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 23:48:17.718090 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 23:48:17.722044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:48:17.729015 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:48:17.732307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:48:17.739987 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:48:17.746199 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:48:17.746349 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:48:17.750770 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:48:17.762462 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:48:17.774246 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:48:17.777632 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:48:17.779901 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:48:17.908012 tar[1568]: linux-amd64/README.md Apr 17 23:48:17.919065 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:48:18.391360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:18.394018 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:48:18.395837 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:48:18.396332 systemd[1]: Startup finished in 6.334s (kernel) + 3.606s (userspace) = 9.940s. Apr 17 23:48:18.852482 kubelet[1673]: E0417 23:48:18.852297 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:48:18.854973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:48:18.855237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:48:22.929110 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
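The kubelet exit above (`status=1/FAILURE`) is the normal pre-bootstrap state: `/var/lib/kubelet/config.yaml` is written by `kubeadm init`/`kubeadm join`, which has not run yet, so the kubelet fails fast and systemd will retry it later (the restart job appears further down in this log). A hedged sketch of a preflight check that reproduces the same failure mode:

```python
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

def check_kubelet_config(path=KUBELET_CONFIG):
    """Return (ok, message), mimicking the kubelet's config-load failure.

    This is an illustrative helper, not kubelet code: the kubelet itself
    exits non-zero when the file cannot be read, as seen in the log.
    """
    if not os.path.isfile(path):
        return False, f"failed to load kubelet config file, path: {path}"
    return True, "config file present"

# Probe a path that is certain not to exist, standing in for first boot.
ok, msg = check_kubelet_config("/nonexistent/kubelet/config.yaml")
```

On a real node the fix is to run the kubeadm bootstrap step rather than to create the file by hand.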
Apr 17 23:48:22.945985 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:44026.service - OpenSSH per-connection server daemon (10.0.0.1:44026). Apr 17 23:48:22.988173 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 44026 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:22.990392 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.003327 systemd-logind[1558]: New session 1 of user core. Apr 17 23:48:23.004351 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:48:23.015582 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:48:23.028315 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:48:23.039202 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:48:23.045055 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:48:23.139056 systemd[1692]: Queued start job for default target default.target. Apr 17 23:48:23.139459 systemd[1692]: Created slice app.slice - User Application Slice. Apr 17 23:48:23.139473 systemd[1692]: Reached target paths.target - Paths. Apr 17 23:48:23.139481 systemd[1692]: Reached target timers.target - Timers. Apr 17 23:48:23.149926 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:48:23.156779 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:48:23.156871 systemd[1692]: Reached target sockets.target - Sockets. Apr 17 23:48:23.156887 systemd[1692]: Reached target basic.target - Basic System. Apr 17 23:48:23.156930 systemd[1692]: Reached target default.target - Main User Target. Apr 17 23:48:23.156962 systemd[1692]: Startup finished in 103ms. Apr 17 23:48:23.157042 systemd[1]: Started user@500.service - User Manager for UID 500. 
Apr 17 23:48:23.158246 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:48:23.213173 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:44042.service - OpenSSH per-connection server daemon (10.0.0.1:44042). Apr 17 23:48:23.254230 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 44042 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.255489 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.260981 systemd-logind[1558]: New session 2 of user core. Apr 17 23:48:23.270226 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:48:23.325421 sshd[1704]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:23.335021 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). Apr 17 23:48:23.335325 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:44042.service: Deactivated successfully. Apr 17 23:48:23.337368 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:48:23.337850 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:48:23.338859 systemd-logind[1558]: Removed session 2. Apr 17 23:48:23.375803 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.377506 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.388283 systemd-logind[1558]: New session 3 of user core. Apr 17 23:48:23.406042 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:48:23.455880 sshd[1709]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:23.464982 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068). Apr 17 23:48:23.465274 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:44052.service: Deactivated successfully. 
Apr 17 23:48:23.467740 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:48:23.468490 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:48:23.470400 systemd-logind[1558]: Removed session 3. Apr 17 23:48:23.496101 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.497827 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.502450 systemd-logind[1558]: New session 4 of user core. Apr 17 23:48:23.516224 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:48:23.570581 sshd[1717]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:23.591238 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072). Apr 17 23:48:23.591765 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:44068.service: Deactivated successfully. Apr 17 23:48:23.594432 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:48:23.595148 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:48:23.597256 systemd-logind[1558]: Removed session 4. Apr 17 23:48:23.625314 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.626756 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.630913 systemd-logind[1558]: New session 5 of user core. Apr 17 23:48:23.640975 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 23:48:23.698463 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:48:23.698782 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:48:23.714994 sudo[1732]: pam_unix(sudo:session): session closed for user root Apr 17 23:48:23.716596 sshd[1725]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:23.730202 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:44082.service - OpenSSH per-connection server daemon (10.0.0.1:44082). Apr 17 23:48:23.730588 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:44072.service: Deactivated successfully. Apr 17 23:48:23.732756 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:48:23.733176 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:48:23.734181 systemd-logind[1558]: Removed session 5. Apr 17 23:48:23.760261 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 44082 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.761358 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.764968 systemd-logind[1558]: New session 6 of user core. Apr 17 23:48:23.774921 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:48:23.826834 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:48:23.827114 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:48:23.831443 sudo[1742]: pam_unix(sudo:session): session closed for user root Apr 17 23:48:23.835763 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:48:23.836120 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:48:23.853040 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 17 23:48:23.854622 auditctl[1745]: No rules Apr 17 23:48:23.854982 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:48:23.855213 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:48:23.857412 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:48:23.881758 augenrules[1764]: No rules Apr 17 23:48:23.882940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:48:23.884006 sudo[1741]: pam_unix(sudo:session): session closed for user root Apr 17 23:48:23.885424 sshd[1734]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:23.891944 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:44094.service - OpenSSH per-connection server daemon (10.0.0.1:44094). Apr 17 23:48:23.892465 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:44082.service: Deactivated successfully. Apr 17 23:48:23.893824 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:48:23.894480 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:48:23.896085 systemd-logind[1558]: Removed session 6. Apr 17 23:48:23.922150 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 44094 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:48:23.923158 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:23.926890 systemd-logind[1558]: New session 7 of user core. Apr 17 23:48:23.937943 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:48:23.989209 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:48:23.989439 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:48:24.240169 systemd[1]: Starting docker.service - Docker Application Container Engine... 
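The sudo entries in this session follow sudo's standard log format: `user : PWD=... ; USER=... ; COMMAND=...`. A small parser for that shape (a convenience sketch for auditing these lines, not part of sudo itself):

```python
import re

# sudo's logged-command format, as seen in the entries above.
SUDO_RE = re.compile(
    r"^(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

def parse_sudo(line):
    """Return the fields of a sudo command-log line, or None if it doesn't match."""
    m = SUDO_RE.match(line)
    return m.groupdict() if m else None

# One of the lines logged above, with the syslog prefix stripped.
entry = parse_sudo(
    "core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules"
)
```

Note the `COMMAND` field can contain spaces and `;`, so it must be the greedy tail of the match, as here.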
Apr 17 23:48:24.240175 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:48:24.488559 dockerd[1795]: time="2026-04-17T23:48:24.488426394Z" level=info msg="Starting up" Apr 17 23:48:24.712257 dockerd[1795]: time="2026-04-17T23:48:24.712075275Z" level=info msg="Loading containers: start." Apr 17 23:48:24.826722 kernel: Initializing XFRM netlink socket Apr 17 23:48:24.942987 systemd-networkd[1258]: docker0: Link UP Apr 17 23:48:24.966648 dockerd[1795]: time="2026-04-17T23:48:24.966431439Z" level=info msg="Loading containers: done." Apr 17 23:48:24.981624 dockerd[1795]: time="2026-04-17T23:48:24.981525121Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:48:24.981803 dockerd[1795]: time="2026-04-17T23:48:24.981729724Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:48:24.981834 dockerd[1795]: time="2026-04-17T23:48:24.981804897Z" level=info msg="Daemon has completed initialization" Apr 17 23:48:25.021713 dockerd[1795]: time="2026-04-17T23:48:25.021568626Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:48:25.021903 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:48:25.459642 containerd[1571]: time="2026-04-17T23:48:25.459464368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:48:25.981851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614709537.mount: Deactivated successfully. 
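dockerd's own timestamps above bracket its startup: `Starting up` at `24.488426394Z` and `API listen on /run/docker.sock` at `25.021568626Z`, roughly half a second apart. Since dockerd logs RFC 3339 timestamps with nanosecond precision and Python's `datetime` only stores microseconds, a sketch that truncates before parsing:

```python
from datetime import datetime, timezone

def parse_ts(ts):
    """Parse an RFC 3339 UTC timestamp with nanoseconds (as dockerd emits),
    truncating the fraction to the microseconds datetime can represent."""
    base, frac = ts.rstrip("Z").split(".")
    micro = frac[:6].ljust(6, "0")
    dt = datetime.strptime(f"{base}.{micro}", "%Y-%m-%dT%H:%M:%S.%f")
    return dt.replace(tzinfo=timezone.utc)

# Timestamps copied from the dockerd entries in this log.
start = parse_ts("2026-04-17T23:48:24.488426394Z")   # "Starting up"
ready = parse_ts("2026-04-17T23:48:25.021568626Z")   # "API listen on /run/docker.sock"
elapsed = (ready - start).total_seconds()
```

The truncation loses at most a microsecond, which is negligible for startup-latency comparisons like this one.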
Apr 17 23:48:26.737767 containerd[1571]: time="2026-04-17T23:48:26.737654011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:26.738957 containerd[1571]: time="2026-04-17T23:48:26.738878897Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 17 23:48:26.739841 containerd[1571]: time="2026-04-17T23:48:26.739784597Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:26.744263 containerd[1571]: time="2026-04-17T23:48:26.744192402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:26.745609 containerd[1571]: time="2026-04-17T23:48:26.745458939Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.285935384s" Apr 17 23:48:26.745609 containerd[1571]: time="2026-04-17T23:48:26.745591904Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:48:26.746419 containerd[1571]: time="2026-04-17T23:48:26.746356620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:48:27.554525 containerd[1571]: time="2026-04-17T23:48:27.554386281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:27.555494 containerd[1571]: time="2026-04-17T23:48:27.555392352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 17 23:48:27.556815 containerd[1571]: time="2026-04-17T23:48:27.556733401Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:27.560860 containerd[1571]: time="2026-04-17T23:48:27.560762246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:27.562483 containerd[1571]: time="2026-04-17T23:48:27.562422724Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 816.004472ms" Apr 17 23:48:27.562576 containerd[1571]: time="2026-04-17T23:48:27.562483101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:48:27.563293 containerd[1571]: time="2026-04-17T23:48:27.563219292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:48:28.295024 containerd[1571]: time="2026-04-17T23:48:28.294947251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:28.295857 containerd[1571]: 
time="2026-04-17T23:48:28.295796619Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 17 23:48:28.296891 containerd[1571]: time="2026-04-17T23:48:28.296837256Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:28.299356 containerd[1571]: time="2026-04-17T23:48:28.299291378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:28.300084 containerd[1571]: time="2026-04-17T23:48:28.300024945Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 736.758649ms" Apr 17 23:48:28.300084 containerd[1571]: time="2026-04-17T23:48:28.300068220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:48:28.302510 containerd[1571]: time="2026-04-17T23:48:28.301082503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:48:29.034182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315517952.mount: Deactivated successfully. Apr 17 23:48:29.035406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:48:29.044987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:29.173648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:48:29.178502 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:48:29.219321 kubelet[2028]: E0417 23:48:29.219143 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:48:29.223000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:48:29.223291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:48:29.472993 containerd[1571]: time="2026-04-17T23:48:29.472646643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:29.473758 containerd[1571]: time="2026-04-17T23:48:29.473693456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 17 23:48:29.475122 containerd[1571]: time="2026-04-17T23:48:29.475068729Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:29.478220 containerd[1571]: time="2026-04-17T23:48:29.478103223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:29.478937 containerd[1571]: time="2026-04-17T23:48:29.478863400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.177747435s" Apr 17 23:48:29.478937 containerd[1571]: time="2026-04-17T23:48:29.478904709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:48:29.479754 containerd[1571]: time="2026-04-17T23:48:29.479721262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:48:29.937094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243126929.mount: Deactivated successfully. Apr 17 23:48:30.605205 containerd[1571]: time="2026-04-17T23:48:30.605130161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.606152 containerd[1571]: time="2026-04-17T23:48:30.606119037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 17 23:48:30.607853 containerd[1571]: time="2026-04-17T23:48:30.607784161Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.610797 containerd[1571]: time="2026-04-17T23:48:30.610724114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.612020 containerd[1571]: time="2026-04-17T23:48:30.611966463Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.132195708s" Apr 17 23:48:30.612087 containerd[1571]: time="2026-04-17T23:48:30.612021277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:48:30.613104 containerd[1571]: time="2026-04-17T23:48:30.613070215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:48:30.980818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049206752.mount: Deactivated successfully. Apr 17 23:48:30.986142 containerd[1571]: time="2026-04-17T23:48:30.986078221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.986935 containerd[1571]: time="2026-04-17T23:48:30.986756628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 23:48:30.988234 containerd[1571]: time="2026-04-17T23:48:30.988181559Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.990326 containerd[1571]: time="2026-04-17T23:48:30.990244348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:30.991510 containerd[1571]: time="2026-04-17T23:48:30.991428077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 378.313651ms" Apr 17 
23:48:30.991510 containerd[1571]: time="2026-04-17T23:48:30.991476565Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:48:30.992137 containerd[1571]: time="2026-04-17T23:48:30.992109546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:48:31.362723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508374211.mount: Deactivated successfully. Apr 17 23:48:32.049480 containerd[1571]: time="2026-04-17T23:48:32.049382152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:32.050543 containerd[1571]: time="2026-04-17T23:48:32.050495104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 17 23:48:32.052525 containerd[1571]: time="2026-04-17T23:48:32.052456280Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:32.057085 containerd[1571]: time="2026-04-17T23:48:32.057031783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:32.058329 containerd[1571]: time="2026-04-17T23:48:32.058279824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.066143388s" Apr 17 23:48:32.058329 containerd[1571]: time="2026-04-17T23:48:32.058317244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:48:34.228361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:34.241064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:34.265927 systemd[1]: Reloading requested from client PID 2190 ('systemctl') (unit session-7.scope)... Apr 17 23:48:34.265958 systemd[1]: Reloading... Apr 17 23:48:34.327728 zram_generator::config[2229]: No configuration found. Apr 17 23:48:34.426444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:48:34.476378 systemd[1]: Reloading finished in 210 ms. Apr 17 23:48:34.514450 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:48:34.514514 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:48:34.514815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:34.516313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:34.627760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:34.633063 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:48:34.678281 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:48:34.678281 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 17 23:48:34.678281 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:48:34.678716 kubelet[2289]: I0417 23:48:34.678321 2289 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:48:34.913760 kernel: hrtimer: interrupt took 6595836 ns Apr 17 23:48:35.285756 kubelet[2289]: I0417 23:48:35.285630 2289 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:48:35.285756 kubelet[2289]: I0417 23:48:35.285724 2289 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:48:35.286061 kubelet[2289]: I0417 23:48:35.286007 2289 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:48:35.306375 kubelet[2289]: E0417 23:48:35.306275 2289 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:48:35.308786 kubelet[2289]: I0417 23:48:35.308732 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:48:35.316110 kubelet[2289]: E0417 23:48:35.316017 2289 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:48:35.316110 kubelet[2289]: I0417 23:48:35.316085 2289 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 17 23:48:35.321632 kubelet[2289]: I0417 23:48:35.321578 2289 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:48:35.322946 kubelet[2289]: I0417 23:48:35.322895 2289 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:48:35.323160 kubelet[2289]: I0417 23:48:35.322948 2289 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMan
agerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:48:35.323160 kubelet[2289]: I0417 23:48:35.323159 2289 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:48:35.323308 kubelet[2289]: I0417 23:48:35.323171 2289 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:48:35.323325 kubelet[2289]: I0417 23:48:35.323313 2289 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:48:35.327276 kubelet[2289]: I0417 23:48:35.327225 2289 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:48:35.327276 kubelet[2289]: I0417 23:48:35.327260 2289 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:48:35.327328 kubelet[2289]: I0417 23:48:35.327288 2289 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:48:35.328919 kubelet[2289]: I0417 23:48:35.328897 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:48:35.335544 kubelet[2289]: I0417 23:48:35.335512 2289 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:48:35.335969 kubelet[2289]: E0417 23:48:35.335809 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:48:35.336396 kubelet[2289]: E0417 23:48:35.336108 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:48:35.336396 kubelet[2289]: I0417 23:48:35.336248 2289 
kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:48:35.337543 kubelet[2289]: W0417 23:48:35.337473 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:48:35.341503 kubelet[2289]: I0417 23:48:35.341450 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:48:35.341573 kubelet[2289]: I0417 23:48:35.341515 2289 server.go:1289] "Started kubelet" Apr 17 23:48:35.341638 kubelet[2289]: I0417 23:48:35.341567 2289 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:48:35.341793 kubelet[2289]: I0417 23:48:35.341639 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:48:35.342123 kubelet[2289]: I0417 23:48:35.342097 2289 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:48:35.342568 kubelet[2289]: I0417 23:48:35.342522 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:48:35.344053 kubelet[2289]: I0417 23:48:35.343399 2289 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:48:35.344053 kubelet[2289]: I0417 23:48:35.344007 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:48:35.345047 kubelet[2289]: E0417 23:48:35.343524 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a749cb412b09b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:48:35.341478323 +0000 UTC m=+0.703334788,LastTimestamp:2026-04-17 23:48:35.341478323 +0000 UTC m=+0.703334788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:48:35.346087 kubelet[2289]: E0417 23:48:35.345330 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:35.346087 kubelet[2289]: I0417 23:48:35.345354 2289 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:48:35.346087 kubelet[2289]: I0417 23:48:35.345537 2289 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:48:35.346087 kubelet[2289]: I0417 23:48:35.345571 2289 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:48:35.346087 kubelet[2289]: E0417 23:48:35.345884 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:48:35.346863 kubelet[2289]: I0417 23:48:35.346821 2289 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:48:35.347214 kubelet[2289]: E0417 23:48:35.347137 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection 
refused" interval="200ms" Apr 17 23:48:35.347902 kubelet[2289]: E0417 23:48:35.347871 2289 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:48:35.348002 kubelet[2289]: I0417 23:48:35.347984 2289 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:48:35.348002 kubelet[2289]: I0417 23:48:35.347998 2289 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:48:35.361499 kubelet[2289]: I0417 23:48:35.361391 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:48:35.362535 kubelet[2289]: I0417 23:48:35.362487 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:48:35.362535 kubelet[2289]: I0417 23:48:35.362528 2289 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:48:35.362625 kubelet[2289]: I0417 23:48:35.362548 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:48:35.362625 kubelet[2289]: I0417 23:48:35.362559 2289 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:48:35.362658 kubelet[2289]: E0417 23:48:35.362629 2289 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:48:35.367479 kubelet[2289]: E0417 23:48:35.367452 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:48:35.367776 kubelet[2289]: I0417 23:48:35.367766 2289 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:48:35.367865 kubelet[2289]: I0417 23:48:35.367859 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:48:35.367939 kubelet[2289]: I0417 23:48:35.367906 2289 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:48:35.401257 kubelet[2289]: I0417 23:48:35.401155 2289 policy_none.go:49] "None policy: Start" Apr 17 23:48:35.401257 kubelet[2289]: I0417 23:48:35.401217 2289 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:48:35.401257 kubelet[2289]: I0417 23:48:35.401232 2289 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:48:35.406994 kubelet[2289]: E0417 23:48:35.406791 2289 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:48:35.407059 kubelet[2289]: I0417 23:48:35.407019 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:48:35.407187 kubelet[2289]: I0417 23:48:35.407080 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:48:35.408528 kubelet[2289]: I0417 
23:48:35.408481 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:48:35.408995 kubelet[2289]: E0417 23:48:35.408921 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:48:35.409028 kubelet[2289]: E0417 23:48:35.409005 2289 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:48:35.472153 kubelet[2289]: E0417 23:48:35.472123 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:35.475200 kubelet[2289]: E0417 23:48:35.475164 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:35.476334 kubelet[2289]: E0417 23:48:35.476319 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:35.509994 kubelet[2289]: I0417 23:48:35.509927 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:48:35.510345 kubelet[2289]: E0417 23:48:35.510282 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 17 23:48:35.548046 kubelet[2289]: E0417 23:48:35.547837 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Apr 17 23:48:35.647453 kubelet[2289]: I0417 23:48:35.647293 2289 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:35.647453 kubelet[2289]: I0417 23:48:35.647366 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:35.647453 kubelet[2289]: I0417 23:48:35.647387 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:35.647453 kubelet[2289]: I0417 23:48:35.647464 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:35.647806 kubelet[2289]: I0417 23:48:35.647546 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:35.647806 kubelet[2289]: I0417 23:48:35.647645 2289 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:35.647806 kubelet[2289]: I0417 23:48:35.647721 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:35.647806 kubelet[2289]: I0417 23:48:35.647743 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:35.647806 kubelet[2289]: I0417 23:48:35.647763 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:35.712283 kubelet[2289]: I0417 23:48:35.712196 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:48:35.712818 kubelet[2289]: E0417 23:48:35.712645 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 17 23:48:35.773240 kubelet[2289]: E0417 23:48:35.773114 2289 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:35.774004 containerd[1571]: time="2026-04-17T23:48:35.773923034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:35.776339 kubelet[2289]: E0417 23:48:35.776302 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:35.777195 kubelet[2289]: E0417 23:48:35.776558 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:35.777253 containerd[1571]: time="2026-04-17T23:48:35.777001472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:35.777253 containerd[1571]: time="2026-04-17T23:48:35.777053465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4d1c1b1a1a941ce5b9854c7404ab06c,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:35.948668 kubelet[2289]: E0417 23:48:35.948469 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Apr 17 23:48:36.114575 kubelet[2289]: I0417 23:48:36.114540 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:48:36.115015 kubelet[2289]: E0417 23:48:36.114962 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 
10.0.0.92:6443: connect: connection refused" node="localhost" Apr 17 23:48:36.287878 kubelet[2289]: E0417 23:48:36.287665 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:48:36.347152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535553502.mount: Deactivated successfully. Apr 17 23:48:36.355316 containerd[1571]: time="2026-04-17T23:48:36.355200748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:48:36.358130 containerd[1571]: time="2026-04-17T23:48:36.358023673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:48:36.358917 containerd[1571]: time="2026-04-17T23:48:36.358868970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:48:36.359715 containerd[1571]: time="2026-04-17T23:48:36.359652226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:48:36.360790 containerd[1571]: time="2026-04-17T23:48:36.360729200Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:48:36.361603 containerd[1571]: time="2026-04-17T23:48:36.361482535Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:48:36.362580 containerd[1571]: time="2026-04-17T23:48:36.362556379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:48:36.364696 containerd[1571]: time="2026-04-17T23:48:36.364593989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:48:36.365766 containerd[1571]: time="2026-04-17T23:48:36.365738115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.635547ms" Apr 17 23:48:36.366396 containerd[1571]: time="2026-04-17T23:48:36.366363777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.331798ms" Apr 17 23:48:36.368448 containerd[1571]: time="2026-04-17T23:48:36.368417649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.342365ms" Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477268010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477323657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477331586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477378431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477087608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477152988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477164821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477216104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477047753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477084046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477108046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.477922 containerd[1571]: time="2026-04-17T23:48:36.477199916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:36.526907 containerd[1571]: time="2026-04-17T23:48:36.526841600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"40262bb92e9b388c7aaedf6448802d00aa1641fce8a7c1f5dbc95693f044f253\"" Apr 17 23:48:36.527960 kubelet[2289]: E0417 23:48:36.527864 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:36.534062 containerd[1571]: time="2026-04-17T23:48:36.534036973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bc34af18c0f4b639e5f4506c77651eed9b7dfd2b0a5a1fc16a38bb951664d7b\"" Apr 17 23:48:36.534324 containerd[1571]: time="2026-04-17T23:48:36.534075407Z" level=info msg="CreateContainer within sandbox \"40262bb92e9b388c7aaedf6448802d00aa1641fce8a7c1f5dbc95693f044f253\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:48:36.535780 kubelet[2289]: E0417 23:48:36.535752 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:36.540661 containerd[1571]: time="2026-04-17T23:48:36.540522010Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4d1c1b1a1a941ce5b9854c7404ab06c,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a1ee0ecb86a3a0341f073a8c5e0eba279c272025c807475ed0372e5663be9f\"" Apr 17 23:48:36.541206 kubelet[2289]: E0417 23:48:36.541175 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:36.542020 containerd[1571]: time="2026-04-17T23:48:36.541989474Z" level=info msg="CreateContainer within sandbox \"0bc34af18c0f4b639e5f4506c77651eed9b7dfd2b0a5a1fc16a38bb951664d7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:48:36.545525 containerd[1571]: time="2026-04-17T23:48:36.545505684Z" level=info msg="CreateContainer within sandbox \"09a1ee0ecb86a3a0341f073a8c5e0eba279c272025c807475ed0372e5663be9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:48:36.555317 containerd[1571]: time="2026-04-17T23:48:36.555269368Z" level=info msg="CreateContainer within sandbox \"40262bb92e9b388c7aaedf6448802d00aa1641fce8a7c1f5dbc95693f044f253\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"235a634e95e51b4fdf1b3ba3af782bffa61bace703d9fc288a6ce4fc3614143f\"" Apr 17 23:48:36.556658 containerd[1571]: time="2026-04-17T23:48:36.556630142Z" level=info msg="StartContainer for \"235a634e95e51b4fdf1b3ba3af782bffa61bace703d9fc288a6ce4fc3614143f\"" Apr 17 23:48:36.562596 containerd[1571]: time="2026-04-17T23:48:36.562500583Z" level=info msg="CreateContainer within sandbox \"0bc34af18c0f4b639e5f4506c77651eed9b7dfd2b0a5a1fc16a38bb951664d7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31c2e6c9a00a69278316380a32e8ef8b43c4fa390e22e3e574d2128678e3556f\"" Apr 17 23:48:36.562999 containerd[1571]: time="2026-04-17T23:48:36.562954916Z" level=info msg="StartContainer for 
\"31c2e6c9a00a69278316380a32e8ef8b43c4fa390e22e3e574d2128678e3556f\"" Apr 17 23:48:36.572928 containerd[1571]: time="2026-04-17T23:48:36.572862749Z" level=info msg="CreateContainer within sandbox \"09a1ee0ecb86a3a0341f073a8c5e0eba279c272025c807475ed0372e5663be9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec0ea5614abfc2fd104902af127b475496b98d7686bfca0337bfd2ea981caee2\"" Apr 17 23:48:36.574253 containerd[1571]: time="2026-04-17T23:48:36.573769615Z" level=info msg="StartContainer for \"ec0ea5614abfc2fd104902af127b475496b98d7686bfca0337bfd2ea981caee2\"" Apr 17 23:48:36.612540 kubelet[2289]: E0417 23:48:36.612499 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:48:36.623156 containerd[1571]: time="2026-04-17T23:48:36.621967514Z" level=info msg="StartContainer for \"235a634e95e51b4fdf1b3ba3af782bffa61bace703d9fc288a6ce4fc3614143f\" returns successfully" Apr 17 23:48:36.641124 containerd[1571]: time="2026-04-17T23:48:36.641006934Z" level=info msg="StartContainer for \"31c2e6c9a00a69278316380a32e8ef8b43c4fa390e22e3e574d2128678e3556f\" returns successfully" Apr 17 23:48:36.652251 containerd[1571]: time="2026-04-17T23:48:36.651779432Z" level=info msg="StartContainer for \"ec0ea5614abfc2fd104902af127b475496b98d7686bfca0337bfd2ea981caee2\" returns successfully" Apr 17 23:48:36.653372 kubelet[2289]: E0417 23:48:36.653334 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 
23:48:36.917901 kubelet[2289]: I0417 23:48:36.917476 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:48:37.383903 kubelet[2289]: E0417 23:48:37.381808 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:37.383903 kubelet[2289]: E0417 23:48:37.381991 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:37.383903 kubelet[2289]: E0417 23:48:37.383480 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:37.383903 kubelet[2289]: E0417 23:48:37.383655 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:37.385315 kubelet[2289]: E0417 23:48:37.385264 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:37.385383 kubelet[2289]: E0417 23:48:37.385377 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:37.667832 kubelet[2289]: E0417 23:48:37.667005 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 23:48:37.842743 kubelet[2289]: I0417 23:48:37.842663 2289 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:48:37.842743 kubelet[2289]: E0417 23:48:37.842732 2289 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Apr 17 23:48:37.860649 kubelet[2289]: E0417 23:48:37.860572 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:37.962204 kubelet[2289]: E0417 23:48:37.961958 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.062953 kubelet[2289]: E0417 23:48:38.062877 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.163937 kubelet[2289]: E0417 23:48:38.163833 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.265445 kubelet[2289]: E0417 23:48:38.265253 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.365993 kubelet[2289]: E0417 23:48:38.365873 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.388799 kubelet[2289]: E0417 23:48:38.388748 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:38.388799 kubelet[2289]: E0417 23:48:38.388799 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:38.389040 kubelet[2289]: E0417 23:48:38.388980 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:38.389126 kubelet[2289]: E0417 23:48:38.389059 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 
23:48:38.444058 kubelet[2289]: E0417 23:48:38.444003 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:48:38.444294 kubelet[2289]: E0417 23:48:38.444221 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:38.466902 kubelet[2289]: E0417 23:48:38.466786 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:48:38.647246 kubelet[2289]: I0417 23:48:38.647036 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:38.658573 kubelet[2289]: I0417 23:48:38.658518 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:38.664787 kubelet[2289]: I0417 23:48:38.664729 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:39.334735 kubelet[2289]: I0417 23:48:39.334359 2289 apiserver.go:52] "Watching apiserver" Apr 17 23:48:39.345990 kubelet[2289]: I0417 23:48:39.345935 2289 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:48:39.414148 kubelet[2289]: E0417 23:48:39.414070 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:39.414930 kubelet[2289]: I0417 23:48:39.414362 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:39.415420 kubelet[2289]: E0417 23:48:39.415311 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:39.421813 kubelet[2289]: E0417 23:48:39.421741 2289 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:39.421932 kubelet[2289]: E0417 23:48:39.421886 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:40.110415 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-7.scope)... Apr 17 23:48:40.110450 systemd[1]: Reloading... Apr 17 23:48:40.189745 zram_generator::config[2624]: No configuration found. Apr 17 23:48:40.293078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:48:40.350899 systemd[1]: Reloading finished in 240 ms. Apr 17 23:48:40.377079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:40.389157 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:48:40.389371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:40.402102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:48:40.519837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:48:40.533341 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:48:40.587532 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:48:40.587532 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:48:40.587532 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:48:40.587532 kubelet[2679]: I0417 23:48:40.587521 2679 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:48:40.595103 kubelet[2679]: I0417 23:48:40.595005 2679 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:48:40.595103 kubelet[2679]: I0417 23:48:40.595029 2679 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:48:40.595927 kubelet[2679]: I0417 23:48:40.595881 2679 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:48:40.598288 kubelet[2679]: I0417 23:48:40.598223 2679 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:48:40.601321 kubelet[2679]: I0417 23:48:40.601272 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:48:40.605481 kubelet[2679]: E0417 23:48:40.605371 2679 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:48:40.605481 kubelet[2679]: I0417 23:48:40.605409 2679 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 17 23:48:40.610410 kubelet[2679]: I0417 23:48:40.610336 2679 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:48:40.611120 kubelet[2679]: I0417 23:48:40.611040 2679 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:48:40.611332 kubelet[2679]: I0417 23:48:40.611102 2679 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 
23:48:40.611332 kubelet[2679]: I0417 23:48:40.611308 2679 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:48:40.611332 kubelet[2679]: I0417 23:48:40.611320 2679 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:48:40.611629 kubelet[2679]: I0417 23:48:40.611405 2679 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:48:40.611737 kubelet[2679]: I0417 23:48:40.611663 2679 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:48:40.611879 kubelet[2679]: I0417 23:48:40.611807 2679 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:48:40.611879 kubelet[2679]: I0417 23:48:40.611853 2679 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:48:40.611879 kubelet[2679]: I0417 23:48:40.611869 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:48:40.615312 kubelet[2679]: I0417 23:48:40.615293 2679 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:48:40.616447 kubelet[2679]: I0417 23:48:40.616287 2679 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:48:40.628275 kubelet[2679]: I0417 23:48:40.628090 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:48:40.628704 kubelet[2679]: I0417 23:48:40.628402 2679 server.go:1289] "Started kubelet" Apr 17 23:48:40.632592 kubelet[2679]: I0417 23:48:40.632403 2679 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:48:40.634797 kubelet[2679]: I0417 23:48:40.634459 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:48:40.635126 kubelet[2679]: I0417 23:48:40.632555 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:48:40.635937 kubelet[2679]: I0417 23:48:40.635866 
2679 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:48:40.636573 kubelet[2679]: I0417 23:48:40.636308 2679 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:48:40.638754 kubelet[2679]: I0417 23:48:40.638596 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:48:40.640664 kubelet[2679]: I0417 23:48:40.640001 2679 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:48:40.641888 kubelet[2679]: I0417 23:48:40.641869 2679 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:48:40.643045 kubelet[2679]: I0417 23:48:40.643028 2679 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:48:40.643846 kubelet[2679]: I0417 23:48:40.643822 2679 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:48:40.644036 kubelet[2679]: I0417 23:48:40.644022 2679 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:48:40.649293 kubelet[2679]: E0417 23:48:40.649233 2679 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:48:40.650365 kubelet[2679]: I0417 23:48:40.650350 2679 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:48:40.660550 kubelet[2679]: I0417 23:48:40.660398 2679 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:48:40.661926 kubelet[2679]: I0417 23:48:40.661849 2679 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:48:40.662362 kubelet[2679]: I0417 23:48:40.662352 2679 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:48:40.662459 kubelet[2679]: I0417 23:48:40.662454 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:48:40.662504 kubelet[2679]: I0417 23:48:40.662500 2679 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:48:40.662565 kubelet[2679]: E0417 23:48:40.662555 2679 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:48:40.690812 kubelet[2679]: I0417 23:48:40.690785 2679 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:48:40.690971 kubelet[2679]: I0417 23:48:40.690945 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:48:40.690971 kubelet[2679]: I0417 23:48:40.690977 2679 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:48:40.691098 kubelet[2679]: I0417 23:48:40.691078 2679 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:48:40.691118 kubelet[2679]: I0417 23:48:40.691099 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:48:40.691118 kubelet[2679]: I0417 23:48:40.691113 2679 policy_none.go:49] "None policy: Start" Apr 17 23:48:40.691159 kubelet[2679]: I0417 23:48:40.691121 2679 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:48:40.691159 kubelet[2679]: I0417 23:48:40.691128 2679 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:48:40.691211 kubelet[2679]: I0417 23:48:40.691191 2679 state_mem.go:75] "Updated machine memory state" Apr 17 23:48:40.692149 kubelet[2679]: E0417 23:48:40.692116 2679 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:48:40.692285 kubelet[2679]: I0417 
23:48:40.692254 2679 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:48:40.692315 kubelet[2679]: I0417 23:48:40.692280 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:48:40.692659 kubelet[2679]: I0417 23:48:40.692580 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:48:40.693525 kubelet[2679]: E0417 23:48:40.693488 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:48:40.806560 kubelet[2679]: I0417 23:48:40.805558 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:40.806560 kubelet[2679]: I0417 23:48:40.806401 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.806560 kubelet[2679]: I0417 23:48:40.807323 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:40.815275 kubelet[2679]: I0417 23:48:40.815185 2679 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:48:40.824352 kubelet[2679]: E0417 23:48:40.824273 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.824587 kubelet[2679]: E0417 23:48:40.824572 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:40.824623 kubelet[2679]: E0417 23:48:40.824612 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:40.826732 kubelet[2679]: I0417 23:48:40.826627 2679 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 17 23:48:40.826867 kubelet[2679]: I0417 23:48:40.826760 2679 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:48:40.844240 kubelet[2679]: I0417 23:48:40.844142 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:40.844240 kubelet[2679]: I0417 23:48:40.844183 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:40.844240 kubelet[2679]: I0417 23:48:40.844206 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.844240 kubelet[2679]: I0417 23:48:40.844218 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.844240 kubelet[2679]: I0417 23:48:40.844232 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") 
pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:40.844475 kubelet[2679]: I0417 23:48:40.844244 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4d1c1b1a1a941ce5b9854c7404ab06c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4d1c1b1a1a941ce5b9854c7404ab06c\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:40.844475 kubelet[2679]: I0417 23:48:40.844260 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.844475 kubelet[2679]: I0417 23:48:40.844280 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:40.846622 kubelet[2679]: I0417 23:48:40.844318 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:48:41.081769 sudo[2718]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 23:48:41.082007 sudo[2718]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Apr 17 23:48:41.125521 kubelet[2679]: E0417 23:48:41.125462 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.125521 kubelet[2679]: E0417 23:48:41.125471 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.125764 kubelet[2679]: E0417 23:48:41.125474 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.602393 sudo[2718]: pam_unix(sudo:session): session closed for user root Apr 17 23:48:41.614895 kubelet[2679]: I0417 23:48:41.614816 2679 apiserver.go:52] "Watching apiserver" Apr 17 23:48:41.643858 kubelet[2679]: I0417 23:48:41.642800 2679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:48:41.663809 kubelet[2679]: I0417 23:48:41.663736 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.6637166089999997 podStartE2EDuration="3.663716609s" podCreationTimestamp="2026-04-17 23:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:48:41.663516719 +0000 UTC m=+1.124326159" watchObservedRunningTime="2026-04-17 23:48:41.663716609 +0000 UTC m=+1.124526046" Apr 17 23:48:41.680524 kubelet[2679]: E0417 23:48:41.680448 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.682721 kubelet[2679]: I0417 23:48:41.681084 2679 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:41.682721 kubelet[2679]: I0417 23:48:41.681887 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:41.691531 kubelet[2679]: E0417 23:48:41.691479 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:48:41.691762 kubelet[2679]: E0417 23:48:41.691736 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.692048 kubelet[2679]: E0417 23:48:41.692015 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:48:41.692123 kubelet[2679]: I0417 23:48:41.692063 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.692048588 podStartE2EDuration="3.692048588s" podCreationTimestamp="2026-04-17 23:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:48:41.675244305 +0000 UTC m=+1.136053748" watchObservedRunningTime="2026-04-17 23:48:41.692048588 +0000 UTC m=+1.152858036" Apr 17 23:48:41.692194 kubelet[2679]: E0417 23:48:41.692176 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:41.743412 kubelet[2679]: I0417 23:48:41.742918 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.742898282 podStartE2EDuration="3.742898282s" podCreationTimestamp="2026-04-17 23:48:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:48:41.692444441 +0000 UTC m=+1.153253889" watchObservedRunningTime="2026-04-17 23:48:41.742898282 +0000 UTC m=+1.203707726" Apr 17 23:48:42.683297 kubelet[2679]: E0417 23:48:42.682837 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:42.683297 kubelet[2679]: E0417 23:48:42.682881 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:42.683297 kubelet[2679]: E0417 23:48:42.683123 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:43.053116 sudo[1777]: pam_unix(sudo:session): session closed for user root Apr 17 23:48:43.056493 sshd[1771]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:43.062140 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:44094.service: Deactivated successfully. Apr 17 23:48:43.065605 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:48:43.066136 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:48:43.068949 systemd-logind[1558]: Removed session 7. 
Apr 17 23:48:43.684334 kubelet[2679]: E0417 23:48:43.684273 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:46.297579 kubelet[2679]: E0417 23:48:46.297292 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:46.691527 kubelet[2679]: E0417 23:48:46.691320 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:46.758241 kubelet[2679]: I0417 23:48:46.758202 2679 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:48:46.758591 containerd[1571]: time="2026-04-17T23:48:46.758532609Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 23:48:46.758925 kubelet[2679]: I0417 23:48:46.758769 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:48:47.695286 kubelet[2679]: E0417 23:48:47.693819 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831394 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-xtables-lock\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831467 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-config-path\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831490 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-net\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831503 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-hubble-tls\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831519 2679 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c10d2ad6-6781-44f7-b7b0-d524aef9e7b9-kube-proxy\") pod \"kube-proxy-7tpqs\" (UID: \"c10d2ad6-6781-44f7-b7b0-d524aef9e7b9\") " pod="kube-system/kube-proxy-7tpqs" Apr 17 23:48:47.831571 kubelet[2679]: I0417 23:48:47.831532 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10d2ad6-6781-44f7-b7b0-d524aef9e7b9-lib-modules\") pod \"kube-proxy-7tpqs\" (UID: \"c10d2ad6-6781-44f7-b7b0-d524aef9e7b9\") " pod="kube-system/kube-proxy-7tpqs" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831545 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-hostproc\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831558 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-kernel\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831573 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdsb\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-kube-api-access-rcdsb\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831591 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-bpf-maps\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831603 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a593f1-d100-49f3-896d-8ff9283e01b2-clustermesh-secrets\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832025 kubelet[2679]: I0417 23:48:47.831617 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-run\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831718 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c10d2ad6-6781-44f7-b7b0-d524aef9e7b9-xtables-lock\") pod \"kube-proxy-7tpqs\" (UID: \"c10d2ad6-6781-44f7-b7b0-d524aef9e7b9\") " pod="kube-system/kube-proxy-7tpqs" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831739 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkw5b\" (UniqueName: \"kubernetes.io/projected/c10d2ad6-6781-44f7-b7b0-d524aef9e7b9-kube-api-access-kkw5b\") pod \"kube-proxy-7tpqs\" (UID: \"c10d2ad6-6781-44f7-b7b0-d524aef9e7b9\") " pod="kube-system/kube-proxy-7tpqs" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831775 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-cgroup\") pod \"cilium-rwmrg\" (UID: 
\"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831856 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-etc-cni-netd\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831873 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cni-path\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:47.832197 kubelet[2679]: I0417 23:48:47.831886 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-lib-modules\") pod \"cilium-rwmrg\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") " pod="kube-system/cilium-rwmrg" Apr 17 23:48:48.034031 kubelet[2679]: I0417 23:48:48.033812 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc239fd9-c47d-4024-a979-b6dee7296dc0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sq2lh\" (UID: \"fc239fd9-c47d-4024-a979-b6dee7296dc0\") " pod="kube-system/cilium-operator-6c4d7847fc-sq2lh" Apr 17 23:48:48.034031 kubelet[2679]: I0417 23:48:48.033957 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgsv\" (UniqueName: \"kubernetes.io/projected/fc239fd9-c47d-4024-a979-b6dee7296dc0-kube-api-access-hdgsv\") pod \"cilium-operator-6c4d7847fc-sq2lh\" (UID: \"fc239fd9-c47d-4024-a979-b6dee7296dc0\") " 
pod="kube-system/cilium-operator-6c4d7847fc-sq2lh" Apr 17 23:48:48.035202 kubelet[2679]: E0417 23:48:48.035139 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:48.036021 containerd[1571]: time="2026-04-17T23:48:48.035955244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tpqs,Uid:c10d2ad6-6781-44f7-b7b0-d524aef9e7b9,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:48.038861 kubelet[2679]: E0417 23:48:48.038747 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:48.039878 containerd[1571]: time="2026-04-17T23:48:48.039765490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwmrg,Uid:e9a593f1-d100-49f3-896d-8ff9283e01b2,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:48.068132 containerd[1571]: time="2026-04-17T23:48:48.067857572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:48.068132 containerd[1571]: time="2026-04-17T23:48:48.067902888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:48.068132 containerd[1571]: time="2026-04-17T23:48:48.067914998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.068388 containerd[1571]: time="2026-04-17T23:48:48.068230762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.070751 containerd[1571]: time="2026-04-17T23:48:48.070399828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:48.070751 containerd[1571]: time="2026-04-17T23:48:48.070625633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:48.070751 containerd[1571]: time="2026-04-17T23:48:48.070636374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.074140 containerd[1571]: time="2026-04-17T23:48:48.073876493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.112618 containerd[1571]: time="2026-04-17T23:48:48.112534101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwmrg,Uid:e9a593f1-d100-49f3-896d-8ff9283e01b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\"" Apr 17 23:48:48.113311 kubelet[2679]: E0417 23:48:48.113278 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:48.115510 containerd[1571]: time="2026-04-17T23:48:48.115274233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tpqs,Uid:c10d2ad6-6781-44f7-b7b0-d524aef9e7b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6369e15d0f8c0a2557e95af28b7ff01c9cc2c3ce1e48cc5839ec5a77632871c3\"" Apr 17 23:48:48.116979 containerd[1571]: time="2026-04-17T23:48:48.116931721Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:48:48.117826 kubelet[2679]: E0417 23:48:48.116892 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 17 23:48:48.126542 containerd[1571]: time="2026-04-17T23:48:48.125846566Z" level=info msg="CreateContainer within sandbox \"6369e15d0f8c0a2557e95af28b7ff01c9cc2c3ce1e48cc5839ec5a77632871c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:48:48.151978 containerd[1571]: time="2026-04-17T23:48:48.151873077Z" level=info msg="CreateContainer within sandbox \"6369e15d0f8c0a2557e95af28b7ff01c9cc2c3ce1e48cc5839ec5a77632871c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"614e83dda5ee424da2dbc66b271e0faf8450e7eb7628623b8d236cd9f270bd7b\"" Apr 17 23:48:48.152599 containerd[1571]: time="2026-04-17T23:48:48.152581269Z" level=info msg="StartContainer for \"614e83dda5ee424da2dbc66b271e0faf8450e7eb7628623b8d236cd9f270bd7b\"" Apr 17 23:48:48.210982 containerd[1571]: time="2026-04-17T23:48:48.210858414Z" level=info msg="StartContainer for \"614e83dda5ee424da2dbc66b271e0faf8450e7eb7628623b8d236cd9f270bd7b\" returns successfully" Apr 17 23:48:48.267509 kubelet[2679]: E0417 23:48:48.267385 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:48.269303 containerd[1571]: time="2026-04-17T23:48:48.268920890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sq2lh,Uid:fc239fd9-c47d-4024-a979-b6dee7296dc0,Namespace:kube-system,Attempt:0,}" Apr 17 23:48:48.299173 containerd[1571]: time="2026-04-17T23:48:48.298122683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:48:48.299173 containerd[1571]: time="2026-04-17T23:48:48.299028678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:48:48.299173 containerd[1571]: time="2026-04-17T23:48:48.299037943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.299747 containerd[1571]: time="2026-04-17T23:48:48.299509940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:48:48.433258 containerd[1571]: time="2026-04-17T23:48:48.433194312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sq2lh,Uid:fc239fd9-c47d-4024-a979-b6dee7296dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\"" Apr 17 23:48:48.434935 kubelet[2679]: E0417 23:48:48.434385 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:48.698858 kubelet[2679]: E0417 23:48:48.698655 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:50.691369 kubelet[2679]: I0417 23:48:50.691260 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7tpqs" podStartSLOduration=3.691073884 podStartE2EDuration="3.691073884s" podCreationTimestamp="2026-04-17 23:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:48:48.710404333 +0000 UTC m=+8.171213761" watchObservedRunningTime="2026-04-17 23:48:50.691073884 +0000 UTC m=+10.151883327" Apr 17 23:48:51.735546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771891283.mount: Deactivated successfully. 
Apr 17 23:48:52.403351 kubelet[2679]: E0417 23:48:52.403269 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:52.651336 kubelet[2679]: E0417 23:48:52.651298 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:53.481043 containerd[1571]: time="2026-04-17T23:48:53.480854685Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:53.482129 containerd[1571]: time="2026-04-17T23:48:53.482041526Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 23:48:53.497961 containerd[1571]: time="2026-04-17T23:48:53.497790322Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:53.502727 containerd[1571]: time="2026-04-17T23:48:53.502579277Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.385576627s" Apr 17 23:48:53.503022 containerd[1571]: time="2026-04-17T23:48:53.502754034Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 23:48:53.509118 containerd[1571]: time="2026-04-17T23:48:53.509082465Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:48:53.524091 containerd[1571]: time="2026-04-17T23:48:53.523995922Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:48:53.541091 containerd[1571]: time="2026-04-17T23:48:53.541025564Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\"" Apr 17 23:48:53.541985 containerd[1571]: time="2026-04-17T23:48:53.541882779Z" level=info msg="StartContainer for \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\"" Apr 17 23:48:53.601298 containerd[1571]: time="2026-04-17T23:48:53.601192095Z" level=info msg="StartContainer for \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\" returns successfully" Apr 17 23:48:53.695720 containerd[1571]: time="2026-04-17T23:48:53.695610890Z" level=info msg="shim disconnected" id=c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1 namespace=k8s.io Apr 17 23:48:53.695720 containerd[1571]: time="2026-04-17T23:48:53.695719358Z" level=warning msg="cleaning up after shim disconnected" id=c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1 namespace=k8s.io Apr 17 23:48:53.695720 containerd[1571]: time="2026-04-17T23:48:53.695727729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:48:53.717714 kubelet[2679]: E0417 23:48:53.717579 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:53.723920 containerd[1571]: time="2026-04-17T23:48:53.723733323Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:48:53.742293 containerd[1571]: time="2026-04-17T23:48:53.742106014Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\"" Apr 17 23:48:53.743962 containerd[1571]: time="2026-04-17T23:48:53.743842731Z" level=info msg="StartContainer for \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\"" Apr 17 23:48:53.805481 containerd[1571]: time="2026-04-17T23:48:53.805341002Z" level=info msg="StartContainer for \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\" returns successfully" Apr 17 23:48:53.820810 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:48:53.822072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:48:53.822143 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:48:53.831023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:48:53.849030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 23:48:53.850843 containerd[1571]: time="2026-04-17T23:48:53.850645791Z" level=info msg="shim disconnected" id=8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c namespace=k8s.io Apr 17 23:48:53.850995 containerd[1571]: time="2026-04-17T23:48:53.850877381Z" level=warning msg="cleaning up after shim disconnected" id=8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c namespace=k8s.io Apr 17 23:48:53.850995 containerd[1571]: time="2026-04-17T23:48:53.850977377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:48:54.534646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1-rootfs.mount: Deactivated successfully. Apr 17 23:48:54.722008 kubelet[2679]: E0417 23:48:54.721969 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:54.727581 containerd[1571]: time="2026-04-17T23:48:54.727438116Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:48:54.750636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333329386.mount: Deactivated successfully. 
Apr 17 23:48:54.765982 containerd[1571]: time="2026-04-17T23:48:54.765870335Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\"" Apr 17 23:48:54.766553 containerd[1571]: time="2026-04-17T23:48:54.766521706Z" level=info msg="StartContainer for \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\"" Apr 17 23:48:54.837588 containerd[1571]: time="2026-04-17T23:48:54.837355781Z" level=info msg="StartContainer for \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\" returns successfully" Apr 17 23:48:54.868776 containerd[1571]: time="2026-04-17T23:48:54.868613365Z" level=info msg="shim disconnected" id=abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14 namespace=k8s.io Apr 17 23:48:54.868776 containerd[1571]: time="2026-04-17T23:48:54.868759036Z" level=warning msg="cleaning up after shim disconnected" id=abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14 namespace=k8s.io Apr 17 23:48:54.868776 containerd[1571]: time="2026-04-17T23:48:54.868767475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:48:55.503177 containerd[1571]: time="2026-04-17T23:48:55.503040949Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:55.504015 containerd[1571]: time="2026-04-17T23:48:55.503957487Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 17 23:48:55.505160 containerd[1571]: time="2026-04-17T23:48:55.505074578Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:48:55.507232 containerd[1571]: time="2026-04-17T23:48:55.507180280Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.998061774s" Apr 17 23:48:55.507232 containerd[1571]: time="2026-04-17T23:48:55.507224548Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 17 23:48:55.513975 containerd[1571]: time="2026-04-17T23:48:55.513931220Z" level=info msg="CreateContainer within sandbox \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:48:55.535530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14-rootfs.mount: Deactivated successfully. 
Apr 17 23:48:55.568062 containerd[1571]: time="2026-04-17T23:48:55.567962246Z" level=info msg="CreateContainer within sandbox \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\"" Apr 17 23:48:55.569268 containerd[1571]: time="2026-04-17T23:48:55.569226764Z" level=info msg="StartContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\"" Apr 17 23:48:55.622352 containerd[1571]: time="2026-04-17T23:48:55.622300767Z" level=info msg="StartContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" returns successfully" Apr 17 23:48:55.772137 kubelet[2679]: E0417 23:48:55.771916 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:55.775624 kubelet[2679]: E0417 23:48:55.772152 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:48:55.788009 containerd[1571]: time="2026-04-17T23:48:55.787856188Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:48:55.825044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566360739.mount: Deactivated successfully. 
Apr 17 23:48:55.830201 containerd[1571]: time="2026-04-17T23:48:55.830031091Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\""
Apr 17 23:48:55.832209 containerd[1571]: time="2026-04-17T23:48:55.832065984Z" level=info msg="StartContainer for \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\""
Apr 17 23:48:55.906835 containerd[1571]: time="2026-04-17T23:48:55.906369252Z" level=info msg="StartContainer for \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\" returns successfully"
Apr 17 23:48:55.936033 containerd[1571]: time="2026-04-17T23:48:55.935904288Z" level=info msg="shim disconnected" id=1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234 namespace=k8s.io
Apr 17 23:48:55.936033 containerd[1571]: time="2026-04-17T23:48:55.935991175Z" level=warning msg="cleaning up after shim disconnected" id=1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234 namespace=k8s.io
Apr 17 23:48:55.936033 containerd[1571]: time="2026-04-17T23:48:55.936002683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:48:56.772254 kubelet[2679]: E0417 23:48:56.772176 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:56.773740 kubelet[2679]: E0417 23:48:56.773664 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:56.790202 containerd[1571]: time="2026-04-17T23:48:56.790041619Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:48:56.796476 kubelet[2679]: I0417 23:48:56.796380 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sq2lh" podStartSLOduration=2.723379884 podStartE2EDuration="9.796365363s" podCreationTimestamp="2026-04-17 23:48:47 +0000 UTC" firstStartedPulling="2026-04-17 23:48:48.435593612 +0000 UTC m=+7.896403050" lastFinishedPulling="2026-04-17 23:48:55.508579094 +0000 UTC m=+14.969388529" observedRunningTime="2026-04-17 23:48:55.839008605 +0000 UTC m=+15.299818110" watchObservedRunningTime="2026-04-17 23:48:56.796365363 +0000 UTC m=+16.257174803"
Apr 17 23:48:56.810064 containerd[1571]: time="2026-04-17T23:48:56.809982353Z" level=info msg="CreateContainer within sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\""
Apr 17 23:48:56.810828 containerd[1571]: time="2026-04-17T23:48:56.810753492Z" level=info msg="StartContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\""
Apr 17 23:48:56.877375 containerd[1571]: time="2026-04-17T23:48:56.877292962Z" level=info msg="StartContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" returns successfully"
Apr 17 23:48:57.056000 kubelet[2679]: I0417 23:48:57.055842 2679 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 17 23:48:57.240237 kubelet[2679]: I0417 23:48:57.240145 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfww\" (UniqueName: \"kubernetes.io/projected/5826d830-9185-4b1a-b0b2-12448d34b8de-kube-api-access-4rfww\") pod \"coredns-674b8bbfcf-8ww4r\" (UID: \"5826d830-9185-4b1a-b0b2-12448d34b8de\") " pod="kube-system/coredns-674b8bbfcf-8ww4r"
Apr 17 23:48:57.240237 kubelet[2679]: I0417 23:48:57.240224 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df698c85-3474-4190-9be2-79167a0b43f9-config-volume\") pod \"coredns-674b8bbfcf-mx2n7\" (UID: \"df698c85-3474-4190-9be2-79167a0b43f9\") " pod="kube-system/coredns-674b8bbfcf-mx2n7"
Apr 17 23:48:57.240237 kubelet[2679]: I0417 23:48:57.240251 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ktm\" (UniqueName: \"kubernetes.io/projected/df698c85-3474-4190-9be2-79167a0b43f9-kube-api-access-w7ktm\") pod \"coredns-674b8bbfcf-mx2n7\" (UID: \"df698c85-3474-4190-9be2-79167a0b43f9\") " pod="kube-system/coredns-674b8bbfcf-mx2n7"
Apr 17 23:48:57.240619 kubelet[2679]: I0417 23:48:57.240263 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5826d830-9185-4b1a-b0b2-12448d34b8de-config-volume\") pod \"coredns-674b8bbfcf-8ww4r\" (UID: \"5826d830-9185-4b1a-b0b2-12448d34b8de\") " pod="kube-system/coredns-674b8bbfcf-8ww4r"
Apr 17 23:48:57.397745 kubelet[2679]: E0417 23:48:57.397424 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:57.398977 containerd[1571]: time="2026-04-17T23:48:57.398921686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8ww4r,Uid:5826d830-9185-4b1a-b0b2-12448d34b8de,Namespace:kube-system,Attempt:0,}"
Apr 17 23:48:57.402149 kubelet[2679]: E0417 23:48:57.401994 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:57.403439 containerd[1571]: time="2026-04-17T23:48:57.403299343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mx2n7,Uid:df698c85-3474-4190-9be2-79167a0b43f9,Namespace:kube-system,Attempt:0,}"
Apr 17 23:48:57.783177 kubelet[2679]: E0417 23:48:57.782904 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:57.803246 kubelet[2679]: I0417 23:48:57.803093 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwmrg" podStartSLOduration=5.410092828 podStartE2EDuration="10.803071021s" podCreationTimestamp="2026-04-17 23:48:47 +0000 UTC" firstStartedPulling="2026-04-17 23:48:48.115909719 +0000 UTC m=+7.576719152" lastFinishedPulling="2026-04-17 23:48:53.50888791 +0000 UTC m=+12.969697345" observedRunningTime="2026-04-17 23:48:57.802863584 +0000 UTC m=+17.263673020" watchObservedRunningTime="2026-04-17 23:48:57.803071021 +0000 UTC m=+17.263880457"
Apr 17 23:48:58.785895 kubelet[2679]: E0417 23:48:58.785794 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:59.761714 systemd-networkd[1258]: cilium_host: Link UP
Apr 17 23:48:59.761887 systemd-networkd[1258]: cilium_net: Link UP
Apr 17 23:48:59.762044 systemd-networkd[1258]: cilium_net: Gained carrier
Apr 17 23:48:59.762186 systemd-networkd[1258]: cilium_host: Gained carrier
Apr 17 23:48:59.790914 kubelet[2679]: E0417 23:48:59.790467 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:48:59.914394 systemd-networkd[1258]: cilium_vxlan: Link UP
Apr 17 23:48:59.914400 systemd-networkd[1258]: cilium_vxlan: Gained carrier
Apr 17 23:48:59.922872 systemd-networkd[1258]: cilium_host: Gained IPv6LL
Apr 17 23:49:00.107109 systemd-networkd[1258]: cilium_net: Gained IPv6LL
Apr 17 23:49:00.185781 kernel: NET: Registered PF_ALG protocol family
Apr 17 23:49:00.793189 kubelet[2679]: E0417 23:49:00.793146 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:01.012249 systemd-networkd[1258]: lxc_health: Link UP
Apr 17 23:49:01.020603 systemd-networkd[1258]: lxc_health: Gained carrier
Apr 17 23:49:01.506280 systemd-networkd[1258]: lxce961e3d3b602: Link UP
Apr 17 23:49:01.516992 kernel: eth0: renamed from tmpcf170
Apr 17 23:49:01.535426 systemd-networkd[1258]: lxc16a2d09161cf: Link UP
Apr 17 23:49:01.537338 systemd-networkd[1258]: lxce961e3d3b602: Gained carrier
Apr 17 23:49:01.537763 kernel: eth0: renamed from tmpde7d6
Apr 17 23:49:01.544199 systemd-networkd[1258]: lxc16a2d09161cf: Gained carrier
Apr 17 23:49:01.698921 systemd-networkd[1258]: cilium_vxlan: Gained IPv6LL
Apr 17 23:49:02.041195 kubelet[2679]: E0417 23:49:02.041162 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:02.468058 update_engine[1563]: I20260417 23:49:02.466874 1563 update_attempter.cc:509] Updating boot flags...
Apr 17 23:49:02.467013 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Apr 17 23:49:02.498758 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3536)
Apr 17 23:49:02.659132 systemd-networkd[1258]: lxc16a2d09161cf: Gained IPv6LL
Apr 17 23:49:02.798236 kubelet[2679]: E0417 23:49:02.798061 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:03.170996 systemd-networkd[1258]: lxce961e3d3b602: Gained IPv6LL
Apr 17 23:49:05.630315 containerd[1571]: time="2026-04-17T23:49:05.628057476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:49:05.630315 containerd[1571]: time="2026-04-17T23:49:05.629424790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:49:05.630315 containerd[1571]: time="2026-04-17T23:49:05.629462864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:05.630315 containerd[1571]: time="2026-04-17T23:49:05.629574643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:05.652037 containerd[1571]: time="2026-04-17T23:49:05.650881788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:49:05.652037 containerd[1571]: time="2026-04-17T23:49:05.650970066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:49:05.652037 containerd[1571]: time="2026-04-17T23:49:05.650988890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:05.652037 containerd[1571]: time="2026-04-17T23:49:05.651096647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:05.668428 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:49:05.696910 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:49:05.716041 containerd[1571]: time="2026-04-17T23:49:05.715979447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8ww4r,Uid:5826d830-9185-4b1a-b0b2-12448d34b8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"de7d6a9dd556c9c4dd651aaada9f45baaf731f984ca73be6f896b949b3fac633\""
Apr 17 23:49:05.717429 kubelet[2679]: E0417 23:49:05.717384 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:05.726247 containerd[1571]: time="2026-04-17T23:49:05.726156985Z" level=info msg="CreateContainer within sandbox \"de7d6a9dd556c9c4dd651aaada9f45baaf731f984ca73be6f896b949b3fac633\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:49:05.741276 containerd[1571]: time="2026-04-17T23:49:05.741080666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mx2n7,Uid:df698c85-3474-4190-9be2-79167a0b43f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf1704be6f4cb8e4f7518785515d46913840c09e2a4b5944a10fcafc416ce786\""
Apr 17 23:49:05.743186 kubelet[2679]: E0417 23:49:05.743110 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:05.758090 containerd[1571]: time="2026-04-17T23:49:05.757456893Z" level=info msg="CreateContainer within sandbox \"cf1704be6f4cb8e4f7518785515d46913840c09e2a4b5944a10fcafc416ce786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:49:05.760364 containerd[1571]: time="2026-04-17T23:49:05.760207347Z" level=info msg="CreateContainer within sandbox \"de7d6a9dd556c9c4dd651aaada9f45baaf731f984ca73be6f896b949b3fac633\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96fce6daf52656e1a3939d4a5caec5d3da6cbdfa01689be1f74380b1beb46a86\""
Apr 17 23:49:05.775125 containerd[1571]: time="2026-04-17T23:49:05.775068400Z" level=info msg="StartContainer for \"96fce6daf52656e1a3939d4a5caec5d3da6cbdfa01689be1f74380b1beb46a86\""
Apr 17 23:49:05.784304 containerd[1571]: time="2026-04-17T23:49:05.784217941Z" level=info msg="CreateContainer within sandbox \"cf1704be6f4cb8e4f7518785515d46913840c09e2a4b5944a10fcafc416ce786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27bee4a3a44117ba9f5ab47c7f671aa2dd0383e1854b356201d076f5f0ee95c1\""
Apr 17 23:49:05.785987 containerd[1571]: time="2026-04-17T23:49:05.785651415Z" level=info msg="StartContainer for \"27bee4a3a44117ba9f5ab47c7f671aa2dd0383e1854b356201d076f5f0ee95c1\""
Apr 17 23:49:05.908032 containerd[1571]: time="2026-04-17T23:49:05.907905172Z" level=info msg="StartContainer for \"96fce6daf52656e1a3939d4a5caec5d3da6cbdfa01689be1f74380b1beb46a86\" returns successfully"
Apr 17 23:49:05.920661 containerd[1571]: time="2026-04-17T23:49:05.920172048Z" level=info msg="StartContainer for \"27bee4a3a44117ba9f5ab47c7f671aa2dd0383e1854b356201d076f5f0ee95c1\" returns successfully"
Apr 17 23:49:06.634970 systemd[1]: run-containerd-runc-k8s.io-cf1704be6f4cb8e4f7518785515d46913840c09e2a4b5944a10fcafc416ce786-runc.estHSD.mount: Deactivated successfully.
Apr 17 23:49:06.820085 kubelet[2679]: E0417 23:49:06.820033 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:06.823376 kubelet[2679]: E0417 23:49:06.823317 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:06.856171 kubelet[2679]: I0417 23:49:06.856052 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mx2n7" podStartSLOduration=19.856025213 podStartE2EDuration="19.856025213s" podCreationTimestamp="2026-04-17 23:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:49:06.837551489 +0000 UTC m=+26.298360925" watchObservedRunningTime="2026-04-17 23:49:06.856025213 +0000 UTC m=+26.316834650"
Apr 17 23:49:06.877768 kubelet[2679]: I0417 23:49:06.877238 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8ww4r" podStartSLOduration=19.877216191 podStartE2EDuration="19.877216191s" podCreationTimestamp="2026-04-17 23:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:49:06.856577774 +0000 UTC m=+26.317387219" watchObservedRunningTime="2026-04-17 23:49:06.877216191 +0000 UTC m=+26.338025631"
Apr 17 23:49:07.365000 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:47268.service - OpenSSH per-connection server daemon (10.0.0.1:47268).
Apr 17 23:49:07.404419 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 47268 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:07.405931 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:07.412042 systemd-logind[1558]: New session 8 of user core.
Apr 17 23:49:07.418020 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:49:07.821084 sshd[4079]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:07.824135 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:47268.service: Deactivated successfully.
Apr 17 23:49:07.826060 kubelet[2679]: E0417 23:49:07.825971 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:07.826060 kubelet[2679]: E0417 23:49:07.825972 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:07.828120 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:49:07.828298 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:49:07.829955 systemd-logind[1558]: Removed session 8.
Apr 17 23:49:08.830143 kubelet[2679]: E0417 23:49:08.830097 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:08.830875 kubelet[2679]: E0417 23:49:08.830251 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:12.840405 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848).
Apr 17 23:49:12.880487 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:12.882354 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:12.888092 systemd-logind[1558]: New session 9 of user core.
Apr 17 23:49:12.898383 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:49:13.017984 sshd[4095]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:13.022167 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:51848.service: Deactivated successfully.
Apr 17 23:49:13.024840 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:49:13.025758 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:49:13.028426 systemd-logind[1558]: Removed session 9.
Apr 17 23:49:18.037484 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864).
Apr 17 23:49:18.076196 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:18.079120 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:18.090514 systemd-logind[1558]: New session 10 of user core.
Apr 17 23:49:18.095607 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:49:18.288181 sshd[4112]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:18.291837 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:51864.service: Deactivated successfully.
Apr 17 23:49:18.293622 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:49:18.293740 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:49:18.294845 systemd-logind[1558]: Removed session 10.
Apr 17 23:49:23.308630 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:52384.service - OpenSSH per-connection server daemon (10.0.0.1:52384).
Apr 17 23:49:23.353509 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 52384 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:23.355432 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:23.362937 systemd-logind[1558]: New session 11 of user core.
Apr 17 23:49:23.381383 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:49:23.519795 sshd[4131]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:23.530171 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:52386.service - OpenSSH per-connection server daemon (10.0.0.1:52386).
Apr 17 23:49:23.530602 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:52384.service: Deactivated successfully.
Apr 17 23:49:23.535091 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:49:23.535195 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:49:23.536932 systemd-logind[1558]: Removed session 11.
Apr 17 23:49:23.566086 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 52386 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:23.569028 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:23.573977 systemd-logind[1558]: New session 12 of user core.
Apr 17 23:49:23.588240 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:49:23.765213 sshd[4144]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:23.774296 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:52388.service - OpenSSH per-connection server daemon (10.0.0.1:52388).
Apr 17 23:49:23.774863 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:52386.service: Deactivated successfully.
Apr 17 23:49:23.784747 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:49:23.786095 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:49:23.791292 systemd-logind[1558]: Removed session 12.
Apr 17 23:49:23.829798 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 52388 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:23.831600 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:23.837344 systemd-logind[1558]: New session 13 of user core.
Apr 17 23:49:23.851268 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:49:23.972582 sshd[4158]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:23.976261 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:52388.service: Deactivated successfully.
Apr 17 23:49:23.978937 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:49:23.978938 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:49:23.980227 systemd-logind[1558]: Removed session 13.
Apr 17 23:49:28.983097 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:52390.service - OpenSSH per-connection server daemon (10.0.0.1:52390).
Apr 17 23:49:29.019133 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 52390 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:29.021254 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:29.027912 systemd-logind[1558]: New session 14 of user core.
Apr 17 23:49:29.042484 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:49:29.162825 sshd[4178]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:29.166525 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:52390.service: Deactivated successfully.
Apr 17 23:49:29.168458 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:49:29.168553 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:49:29.169796 systemd-logind[1558]: Removed session 14.
Apr 17 23:49:34.173623 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:43786.service - OpenSSH per-connection server daemon (10.0.0.1:43786).
Apr 17 23:49:34.214421 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:34.216890 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:34.227890 systemd-logind[1558]: New session 15 of user core.
Apr 17 23:49:34.240557 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:49:34.378465 sshd[4193]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:34.386161 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:43800.service - OpenSSH per-connection server daemon (10.0.0.1:43800).
Apr 17 23:49:34.386893 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:43786.service: Deactivated successfully.
Apr 17 23:49:34.391031 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:49:34.392316 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:49:34.394242 systemd-logind[1558]: Removed session 15.
Apr 17 23:49:34.423858 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 43800 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:34.425220 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:34.432978 systemd-logind[1558]: New session 16 of user core.
Apr 17 23:49:34.444244 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:49:34.653447 sshd[4205]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:34.658006 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804).
Apr 17 23:49:34.658320 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:43800.service: Deactivated successfully.
Apr 17 23:49:34.660771 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:49:34.661278 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:49:34.665017 systemd-logind[1558]: Removed session 16.
Apr 17 23:49:34.700612 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:34.702277 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:34.707610 systemd-logind[1558]: New session 17 of user core.
Apr 17 23:49:34.716156 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:49:35.271146 sshd[4219]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:35.282459 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:43810.service - OpenSSH per-connection server daemon (10.0.0.1:43810).
Apr 17 23:49:35.284184 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:43804.service: Deactivated successfully.
Apr 17 23:49:35.291494 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:49:35.294939 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:49:35.308649 systemd-logind[1558]: Removed session 17.
Apr 17 23:49:35.403914 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 43810 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:35.405992 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:35.411823 systemd-logind[1558]: New session 18 of user core.
Apr 17 23:49:35.423262 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:49:35.740596 sshd[4237]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:35.750794 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812).
Apr 17 23:49:35.751268 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:43810.service: Deactivated successfully.
Apr 17 23:49:35.754605 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:49:35.755812 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:49:35.758908 systemd-logind[1558]: Removed session 18.
Apr 17 23:49:35.784527 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:35.786031 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:35.791741 systemd-logind[1558]: New session 19 of user core.
Apr 17 23:49:35.801236 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:49:35.917040 sshd[4252]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:35.920426 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:43812.service: Deactivated successfully.
Apr 17 23:49:35.923381 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:49:35.923921 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:49:35.925470 systemd-logind[1558]: Removed session 19.
Apr 17 23:49:40.939221 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:49840.service - OpenSSH per-connection server daemon (10.0.0.1:49840).
Apr 17 23:49:40.974696 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:40.977203 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:40.984274 systemd-logind[1558]: New session 20 of user core.
Apr 17 23:49:40.994303 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:49:41.120137 sshd[4275]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:41.123370 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:49840.service: Deactivated successfully.
Apr 17 23:49:41.125189 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:49:41.125375 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:49:41.126507 systemd-logind[1558]: Removed session 20.
Apr 17 23:49:46.135293 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:49846.service - OpenSSH per-connection server daemon (10.0.0.1:49846).
Apr 17 23:49:46.174230 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 49846 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:46.176069 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:46.183895 systemd-logind[1558]: New session 21 of user core.
Apr 17 23:49:46.193186 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:49:46.319457 sshd[4290]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:46.333372 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:49852.service - OpenSSH per-connection server daemon (10.0.0.1:49852).
Apr 17 23:49:46.334064 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:49846.service: Deactivated successfully.
Apr 17 23:49:46.337226 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:49:46.339662 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:49:46.341937 systemd-logind[1558]: Removed session 21.
Apr 17 23:49:46.371722 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 49852 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:46.373214 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:46.378236 systemd-logind[1558]: New session 22 of user core.
Apr 17 23:49:46.384164 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:49:47.855777 containerd[1571]: time="2026-04-17T23:49:47.855631461Z" level=info msg="StopContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" with timeout 30 (s)"
Apr 17 23:49:47.856314 containerd[1571]: time="2026-04-17T23:49:47.856279264Z" level=info msg="Stop container \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" with signal terminated"
Apr 17 23:49:47.903757 containerd[1571]: time="2026-04-17T23:49:47.903633190Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:49:47.907534 containerd[1571]: time="2026-04-17T23:49:47.907487245Z" level=info msg="StopContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" with timeout 2 (s)"
Apr 17 23:49:47.907909 containerd[1571]: time="2026-04-17T23:49:47.907873931Z" level=info msg="Stop container \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" with signal terminated"
Apr 17 23:49:47.913643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692-rootfs.mount: Deactivated successfully.
Apr 17 23:49:47.919971 containerd[1571]: time="2026-04-17T23:49:47.919744147Z" level=info msg="shim disconnected" id=d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692 namespace=k8s.io
Apr 17 23:49:47.919971 containerd[1571]: time="2026-04-17T23:49:47.919833265Z" level=warning msg="cleaning up after shim disconnected" id=d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692 namespace=k8s.io
Apr 17 23:49:47.919971 containerd[1571]: time="2026-04-17T23:49:47.919848808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:47.922245 systemd-networkd[1258]: lxc_health: Link DOWN
Apr 17 23:49:47.922251 systemd-networkd[1258]: lxc_health: Lost carrier
Apr 17 23:49:47.947255 containerd[1571]: time="2026-04-17T23:49:47.947177804Z" level=info msg="StopContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" returns successfully"
Apr 17 23:49:47.953173 containerd[1571]: time="2026-04-17T23:49:47.953094318Z" level=info msg="StopPodSandbox for \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\""
Apr 17 23:49:47.954740 containerd[1571]: time="2026-04-17T23:49:47.953182823Z" level=info msg="Container to stop \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:47.957624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3-shm.mount: Deactivated successfully.
Apr 17 23:49:47.971341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66-rootfs.mount: Deactivated successfully.
Apr 17 23:49:47.977889 containerd[1571]: time="2026-04-17T23:49:47.977809082Z" level=info msg="shim disconnected" id=8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66 namespace=k8s.io
Apr 17 23:49:47.977889 containerd[1571]: time="2026-04-17T23:49:47.977887958Z" level=warning msg="cleaning up after shim disconnected" id=8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66 namespace=k8s.io
Apr 17 23:49:47.977889 containerd[1571]: time="2026-04-17T23:49:47.977896816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:48.002414 containerd[1571]: time="2026-04-17T23:49:48.002255766Z" level=info msg="StopContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" returns successfully"
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003405182Z" level=info msg="StopPodSandbox for \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\""
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003448171Z" level=info msg="Container to stop \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003469288Z" level=info msg="Container to stop \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003481849Z" level=info msg="Container to stop \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003493600Z" level=info msg="Container to stop \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:48.003821 containerd[1571]: time="2026-04-17T23:49:48.003506718Z" level=info msg="Container to stop \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:49:48.003575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3-rootfs.mount: Deactivated successfully.
Apr 17 23:49:48.009748 containerd[1571]: time="2026-04-17T23:49:48.009619389Z" level=info msg="shim disconnected" id=e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3 namespace=k8s.io
Apr 17 23:49:48.009892 containerd[1571]: time="2026-04-17T23:49:48.009753660Z" level=warning msg="cleaning up after shim disconnected" id=e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3 namespace=k8s.io
Apr 17 23:49:48.009892 containerd[1571]: time="2026-04-17T23:49:48.009764912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:48.029426 containerd[1571]: time="2026-04-17T23:49:48.029267297Z" level=info msg="TearDown network for sandbox \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\" successfully"
Apr 17 23:49:48.029426 containerd[1571]: time="2026-04-17T23:49:48.029299312Z" level=info msg="StopPodSandbox for \"e5ca722e747ca106e41872b8be2b68fff4ea67599ff0d6fdcacab941735870d3\" returns successfully"
Apr 17 23:49:48.055590 containerd[1571]: time="2026-04-17T23:49:48.055491286Z" level=info msg="shim disconnected" id=b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323 namespace=k8s.io
Apr 17 23:49:48.055590 containerd[1571]: time="2026-04-17T23:49:48.055578950Z" level=warning msg="cleaning up after shim disconnected" id=b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323 namespace=k8s.io
Apr 17 23:49:48.055590 containerd[1571]: time="2026-04-17T23:49:48.055590116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:48.072762 containerd[1571]:
time="2026-04-17T23:49:48.072656693Z" level=info msg="TearDown network for sandbox \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" successfully"
Apr 17 23:49:48.072762 containerd[1571]: time="2026-04-17T23:49:48.072749001Z" level=info msg="StopPodSandbox for \"b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323\" returns successfully"
Apr 17 23:49:48.131874 kubelet[2679]: I0417 23:49:48.131627 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-xtables-lock\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.131874 kubelet[2679]: I0417 23:49:48.131835 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-config-path\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.131874 kubelet[2679]: I0417 23:49:48.131867 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-hubble-tls\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.131874 kubelet[2679]: I0417 23:49:48.131869 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.131892 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-bpf-maps\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.131914 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cni-path\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.131940 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.131965 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.132009 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-net\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132427 kubelet[2679]: I0417 23:49:48.132027 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-cgroup\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132043 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgsv\" (UniqueName: \"kubernetes.io/projected/fc239fd9-c47d-4024-a979-b6dee7296dc0-kube-api-access-hdgsv\") pod \"fc239fd9-c47d-4024-a979-b6dee7296dc0\" (UID: \"fc239fd9-c47d-4024-a979-b6dee7296dc0\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132079 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcdsb\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-kube-api-access-rcdsb\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132092 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a593f1-d100-49f3-896d-8ff9283e01b2-clustermesh-secrets\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132103 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-lib-modules\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132115 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-hostproc\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132595 kubelet[2679]: I0417 23:49:48.132126 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-etc-cni-netd\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132141 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-run\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132154 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-kernel\") pod \"e9a593f1-d100-49f3-896d-8ff9283e01b2\" (UID: \"e9a593f1-d100-49f3-896d-8ff9283e01b2\") "
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132167 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc239fd9-c47d-4024-a979-b6dee7296dc0-cilium-config-path\") pod \"fc239fd9-c47d-4024-a979-b6dee7296dc0\" (UID: \"fc239fd9-c47d-4024-a979-b6dee7296dc0\") "
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132194 2679 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132202 2679 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.132876 kubelet[2679]: I0417 23:49:48.132208 2679 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.134106 kubelet[2679]: I0417 23:49:48.134019 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc239fd9-c47d-4024-a979-b6dee7296dc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc239fd9-c47d-4024-a979-b6dee7296dc0" (UID: "fc239fd9-c47d-4024-a979-b6dee7296dc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:49:48.134106 kubelet[2679]: I0417 23:49:48.134086 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.134106 kubelet[2679]: I0417 23:49:48.134098 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.136251 kubelet[2679]: I0417 23:49:48.134229 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.136251 kubelet[2679]: I0417 23:49:48.134494 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:49:48.136251 kubelet[2679]: I0417 23:49:48.134536 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.136251 kubelet[2679]: I0417 23:49:48.134555 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.136251 kubelet[2679]: I0417 23:49:48.134571 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.136416 kubelet[2679]: I0417 23:49:48.134586 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:49:48.137734 kubelet[2679]: I0417 23:49:48.137654 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-kube-api-access-rcdsb" (OuterVolumeSpecName: "kube-api-access-rcdsb") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "kube-api-access-rcdsb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:49:48.137890 kubelet[2679]: I0417 23:49:48.137851 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:49:48.138369 kubelet[2679]: I0417 23:49:48.138322 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc239fd9-c47d-4024-a979-b6dee7296dc0-kube-api-access-hdgsv" (OuterVolumeSpecName: "kube-api-access-hdgsv") pod "fc239fd9-c47d-4024-a979-b6dee7296dc0" (UID: "fc239fd9-c47d-4024-a979-b6dee7296dc0"). InnerVolumeSpecName "kube-api-access-hdgsv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:49:48.138923 kubelet[2679]: I0417 23:49:48.138885 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a593f1-d100-49f3-896d-8ff9283e01b2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9a593f1-d100-49f3-896d-8ff9283e01b2" (UID: "e9a593f1-d100-49f3-896d-8ff9283e01b2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:49:48.232730 kubelet[2679]: I0417 23:49:48.232592 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232769 2679 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232812 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232824 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232836 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdgsv\" (UniqueName: \"kubernetes.io/projected/fc239fd9-c47d-4024-a979-b6dee7296dc0-kube-api-access-hdgsv\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232846 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rcdsb\" (UniqueName: \"kubernetes.io/projected/e9a593f1-d100-49f3-896d-8ff9283e01b2-kube-api-access-rcdsb\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232856 2679 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a593f1-d100-49f3-896d-8ff9283e01b2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232865 2679 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.232964 kubelet[2679]: I0417 23:49:48.232872 2679 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.233178 kubelet[2679]: I0417 23:49:48.232880 2679 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.233178 kubelet[2679]: I0417 23:49:48.232886 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.233178 kubelet[2679]: I0417 23:49:48.232892 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a593f1-d100-49f3-896d-8ff9283e01b2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.233178 kubelet[2679]: I0417 23:49:48.232897 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc239fd9-c47d-4024-a979-b6dee7296dc0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:49:48.664499 kubelet[2679]: E0417 23:49:48.664383 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:48.873658 systemd[1]: var-lib-kubelet-pods-fc239fd9\x2dc47d\x2d4024\x2da979\x2db6dee7296dc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhdgsv.mount: Deactivated successfully.
Apr 17 23:49:48.874047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323-rootfs.mount: Deactivated successfully.
Apr 17 23:49:48.874121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b00a0621202104d6b7f108b3022fa0c17db5be0b04dd7a2123bb61dec2060323-shm.mount: Deactivated successfully.
Apr 17 23:49:48.874196 systemd[1]: var-lib-kubelet-pods-e9a593f1\x2dd100\x2d49f3\x2d896d\x2d8ff9283e01b2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 17 23:49:48.874262 systemd[1]: var-lib-kubelet-pods-e9a593f1\x2dd100\x2d49f3\x2d896d\x2d8ff9283e01b2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 17 23:49:48.874329 systemd[1]: var-lib-kubelet-pods-e9a593f1\x2dd100\x2d49f3\x2d896d\x2d8ff9283e01b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcdsb.mount: Deactivated successfully.
Apr 17 23:49:48.998481 kubelet[2679]: I0417 23:49:48.998197 2679 scope.go:117] "RemoveContainer" containerID="d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692"
Apr 17 23:49:49.000932 containerd[1571]: time="2026-04-17T23:49:49.000867509Z" level=info msg="RemoveContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\""
Apr 17 23:49:49.009524 containerd[1571]: time="2026-04-17T23:49:49.009235759Z" level=info msg="RemoveContainer for \"d7584a85f4cebc95a352f98d5be045c851f886df7b154d4d79c6414758bad692\" returns successfully"
Apr 17 23:49:49.012384 kubelet[2679]: I0417 23:49:49.012224 2679 scope.go:117] "RemoveContainer" containerID="8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66"
Apr 17 23:49:49.013782 containerd[1571]: time="2026-04-17T23:49:49.013664187Z" level=info msg="RemoveContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\""
Apr 17 23:49:49.018838 containerd[1571]: time="2026-04-17T23:49:49.018663293Z" level=info msg="RemoveContainer for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" returns successfully"
Apr 17 23:49:49.020024 kubelet[2679]: I0417 23:49:49.019124 2679 scope.go:117] "RemoveContainer" containerID="1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234"
Apr 17 23:49:49.021563 containerd[1571]: time="2026-04-17T23:49:49.021138881Z" level=info msg="RemoveContainer for \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\""
Apr 17 23:49:49.037455 containerd[1571]: time="2026-04-17T23:49:49.037388800Z" level=info msg="RemoveContainer for \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\" returns successfully"
Apr 17 23:49:49.037663 kubelet[2679]: I0417 23:49:49.037644 2679 scope.go:117] "RemoveContainer" containerID="abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14"
Apr 17 23:49:49.039599 containerd[1571]: time="2026-04-17T23:49:49.039017824Z" level=info msg="RemoveContainer for \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\""
Apr 17 23:49:49.047625 containerd[1571]: time="2026-04-17T23:49:49.047557175Z" level=info msg="RemoveContainer for \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\" returns successfully"
Apr 17 23:49:49.048079 kubelet[2679]: I0417 23:49:49.048036 2679 scope.go:117] "RemoveContainer" containerID="8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c"
Apr 17 23:49:49.049470 containerd[1571]: time="2026-04-17T23:49:49.049414906Z" level=info msg="RemoveContainer for \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\""
Apr 17 23:49:49.053165 containerd[1571]: time="2026-04-17T23:49:49.053104616Z" level=info msg="RemoveContainer for \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\" returns successfully"
Apr 17 23:49:49.053445 kubelet[2679]: I0417 23:49:49.053373 2679 scope.go:117] "RemoveContainer" containerID="c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1"
Apr 17 23:49:49.054420 containerd[1571]: time="2026-04-17T23:49:49.054383696Z" level=info msg="RemoveContainer for \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\""
Apr 17 23:49:49.057804 containerd[1571]: time="2026-04-17T23:49:49.057657974Z" level=info msg="RemoveContainer for \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\" returns successfully"
Apr 17 23:49:49.058138 kubelet[2679]: I0417 23:49:49.058106 2679 scope.go:117] "RemoveContainer" containerID="8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66"
Apr 17 23:49:49.058431 containerd[1571]: time="2026-04-17T23:49:49.058380821Z" level=error msg="ContainerStatus for \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\": not found"
Apr 17 23:49:49.066655 kubelet[2679]: E0417 23:49:49.066489 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\": not found" containerID="8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66"
Apr 17 23:49:49.066655 kubelet[2679]: I0417 23:49:49.066629 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66"} err="failed to get container status \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ea41f49b4df7691088cc71482b2e899f3f1967d2122afc86eb36b2c4ac59d66\": not found"
Apr 17 23:49:49.066959 kubelet[2679]: I0417 23:49:49.066729 2679 scope.go:117] "RemoveContainer" containerID="1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234"
Apr 17 23:49:49.067223 containerd[1571]: time="2026-04-17T23:49:49.067166270Z" level=error msg="ContainerStatus for \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\": not found"
Apr 17 23:49:49.067426 kubelet[2679]: E0417 23:49:49.067365 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\": not found" containerID="1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234"
Apr 17 23:49:49.067426 kubelet[2679]: I0417 23:49:49.067407 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234"} err="failed to get container status
\"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d8722d6678ef3431a0deb00c30f8e502890baa82f91ccae35ae315a00653234\": not found" Apr 17 23:49:49.067509 kubelet[2679]: I0417 23:49:49.067430 2679 scope.go:117] "RemoveContainer" containerID="abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14" Apr 17 23:49:49.067819 containerd[1571]: time="2026-04-17T23:49:49.067751310Z" level=error msg="ContainerStatus for \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\": not found" Apr 17 23:49:49.067954 kubelet[2679]: E0417 23:49:49.067917 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\": not found" containerID="abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14" Apr 17 23:49:49.067954 kubelet[2679]: I0417 23:49:49.067939 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14"} err="failed to get container status \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\": rpc error: code = NotFound desc = an error occurred when try to find container \"abf57941d066537b53f0811a66536d87353dea0533656f896056b34b55253b14\": not found" Apr 17 23:49:49.068014 kubelet[2679]: I0417 23:49:49.067954 2679 scope.go:117] "RemoveContainer" containerID="8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c" Apr 17 23:49:49.068275 containerd[1571]: time="2026-04-17T23:49:49.068215386Z" level=error msg="ContainerStatus for \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\": not found" Apr 17 23:49:49.068347 kubelet[2679]: E0417 23:49:49.068335 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\": not found" containerID="8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c" Apr 17 23:49:49.068385 kubelet[2679]: I0417 23:49:49.068350 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c"} err="failed to get container status \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e2ff8879200362b9f4f1083c3600c537669abeca2009d21efc4705b1714739c\": not found" Apr 17 23:49:49.068385 kubelet[2679]: I0417 23:49:49.068361 2679 scope.go:117] "RemoveContainer" containerID="c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1" Apr 17 23:49:49.068590 containerd[1571]: time="2026-04-17T23:49:49.068535028Z" level=error msg="ContainerStatus for \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\": not found" Apr 17 23:49:49.068692 kubelet[2679]: E0417 23:49:49.068644 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\": not found" containerID="c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1" Apr 17 23:49:49.068811 kubelet[2679]: I0417 23:49:49.068753 2679 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1"} err="failed to get container status \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9eee325f801b4208b1075983cffabdf0fd506ea32ce8a178c690434f2b637d1\": not found" Apr 17 23:49:49.791385 sshd[4304]: pam_unix(sshd:session): session closed for user core Apr 17 23:49:49.801295 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960). Apr 17 23:49:49.802210 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:49852.service: Deactivated successfully. Apr 17 23:49:49.804412 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:49:49.806228 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:49:49.809211 systemd-logind[1558]: Removed session 22. Apr 17 23:49:49.838950 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:49:49.840544 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:49:49.845247 systemd-logind[1558]: New session 23 of user core. Apr 17 23:49:49.857180 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:49:50.252995 sshd[4473]: pam_unix(sshd:session): session closed for user core Apr 17 23:49:50.264033 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Apr 17 23:49:50.264663 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:49960.service: Deactivated successfully. Apr 17 23:49:50.269449 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:49:50.279067 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit. 
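The kubelet entries above show a "RemoveContainer" followed by NotFound errors from containerd: the container was already gone by the time its status was looked up, so the "error" actually means the delete is complete. A minimal sketch of that idempotent-delete pattern, using a hypothetical `RuntimeClient` stand-in rather than the real CRI client:

```python
# Sketch of the idempotent-delete pattern visible in the kubelet log above:
# a NotFound from the runtime during cleanup means the container is already
# gone, so the caller treats it as success. RuntimeClient and NotFoundError
# are hypothetical stand-ins, not the real CRI API.

class NotFoundError(Exception):
    pass

class RuntimeClient:
    def __init__(self, containers):
        self._containers = set(containers)

    def container_status(self, container_id):
        if container_id not in self._containers:
            # mirrors: rpc error: code = NotFound ... not found
            raise NotFoundError(f'container "{container_id}": not found')
        return {"id": container_id, "state": "exited"}

    def remove_container(self, container_id):
        self._containers.discard(container_id)

def delete_container(runtime, container_id):
    """Remove a container, treating 'already gone' as success."""
    try:
        runtime.container_status(container_id)
    except NotFoundError:
        return "already-removed"  # what the log surfaces as a NotFound 'error'
    runtime.remove_container(container_id)
    return "removed"

runtime = RuntimeClient(["c9eee325"])
print(delete_container(runtime, "c9eee325"))  # removed
print(delete_container(runtime, "8e2ff887"))  # already-removed
```

The second call is the benign case logged here: the status probe fails with NotFound, and the deletor records it without retrying.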
Apr 17 23:49:50.288778 systemd-logind[1558]: Removed session 23.
Apr 17 23:49:50.337018 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:50.338950 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:50.343331 systemd-logind[1558]: New session 24 of user core.
Apr 17 23:49:50.346909 kubelet[2679]: I0417 23:49:50.346857 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-host-proc-sys-kernel\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.346909 kubelet[2679]: I0417 23:49:50.346905 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-cilium-cgroup\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.346923 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8w9b\" (UniqueName: \"kubernetes.io/projected/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-kube-api-access-v8w9b\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.346937 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-cilium-ipsec-secrets\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.346950 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-host-proc-sys-net\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.346962 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-bpf-maps\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.346973 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-lib-modules\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.350983 kubelet[2679]: I0417 23:49:50.347089 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-hubble-tls\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347162 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-cilium-run\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347181 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-xtables-lock\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347200 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-cilium-config-path\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347227 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-hostproc\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347245 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-cni-path\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351129 kubelet[2679]: I0417 23:49:50.347262 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-etc-cni-netd\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351222 kubelet[2679]: I0417 23:49:50.347280 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9dd65c4-5b32-40e5-a871-b7317bd2ff05-clustermesh-secrets\") pod \"cilium-xgncv\" (UID: \"f9dd65c4-5b32-40e5-a871-b7317bd2ff05\") " pod="kube-system/cilium-xgncv"
Apr 17 23:49:50.351285 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 17 23:49:50.405241 sshd[4487]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:50.412130 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:49964.service - OpenSSH per-connection server daemon (10.0.0.1:49964).
Apr 17 23:49:50.412643 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:49962.service: Deactivated successfully.
Apr 17 23:49:50.415761 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:49:50.417345 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:49:50.419029 systemd-logind[1558]: Removed session 24.
Apr 17 23:49:50.445308 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 49964 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:49:50.446920 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:50.452321 systemd-logind[1558]: New session 25 of user core.
Apr 17 23:49:50.459075 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:49:50.608586 kubelet[2679]: E0417 23:49:50.607770 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:50.608872 containerd[1571]: time="2026-04-17T23:49:50.608790637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgncv,Uid:f9dd65c4-5b32-40e5-a871-b7317bd2ff05,Namespace:kube-system,Attempt:0,}"
Apr 17 23:49:50.649397 containerd[1571]: time="2026-04-17T23:49:50.648097393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:49:50.649397 containerd[1571]: time="2026-04-17T23:49:50.648386847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:49:50.649397 containerd[1571]: time="2026-04-17T23:49:50.648404559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:50.649397 containerd[1571]: time="2026-04-17T23:49:50.648659888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:49:50.667089 kubelet[2679]: I0417 23:49:50.666989 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a593f1-d100-49f3-896d-8ff9283e01b2" path="/var/lib/kubelet/pods/e9a593f1-d100-49f3-896d-8ff9283e01b2/volumes"
Apr 17 23:49:50.667899 kubelet[2679]: I0417 23:49:50.667765 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc239fd9-c47d-4024-a979-b6dee7296dc0" path="/var/lib/kubelet/pods/fc239fd9-c47d-4024-a979-b6dee7296dc0/volumes"
Apr 17 23:49:50.701762 containerd[1571]: time="2026-04-17T23:49:50.701602238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgncv,Uid:f9dd65c4-5b32-40e5-a871-b7317bd2ff05,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\""
Apr 17 23:49:50.703818 kubelet[2679]: E0417 23:49:50.703738 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:50.715067 containerd[1571]: time="2026-04-17T23:49:50.714667408Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:49:50.723540 kubelet[2679]: E0417 23:49:50.723231 2679 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:49:50.742896 containerd[1571]: time="2026-04-17T23:49:50.742751994Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39a6945f35d50159609944cfcaa3bafec333a594a1b845b6f8d6203e2170e481\""
Apr 17 23:49:50.743611 containerd[1571]: time="2026-04-17T23:49:50.743488045Z" level=info msg="StartContainer for \"39a6945f35d50159609944cfcaa3bafec333a594a1b845b6f8d6203e2170e481\""
Apr 17 23:49:50.811041 containerd[1571]: time="2026-04-17T23:49:50.810947348Z" level=info msg="StartContainer for \"39a6945f35d50159609944cfcaa3bafec333a594a1b845b6f8d6203e2170e481\" returns successfully"
Apr 17 23:49:50.922009 containerd[1571]: time="2026-04-17T23:49:50.921925969Z" level=info msg="shim disconnected" id=39a6945f35d50159609944cfcaa3bafec333a594a1b845b6f8d6203e2170e481 namespace=k8s.io
Apr 17 23:49:50.922009 containerd[1571]: time="2026-04-17T23:49:50.921974316Z" level=warning msg="cleaning up after shim disconnected" id=39a6945f35d50159609944cfcaa3bafec333a594a1b845b6f8d6203e2170e481 namespace=k8s.io
Apr 17 23:49:50.922009 containerd[1571]: time="2026-04-17T23:49:50.921982340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:51.021621 kubelet[2679]: E0417 23:49:51.021516 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:51.029296 containerd[1571]: time="2026-04-17T23:49:51.029207608Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:49:51.050573 containerd[1571]: time="2026-04-17T23:49:51.050319503Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"570271b24392bca0c6e19204fbf162e51922262cd5c2eaef564f0d8728192700\""
Apr 17 23:49:51.051435 containerd[1571]: time="2026-04-17T23:49:51.051339834Z" level=info msg="StartContainer for \"570271b24392bca0c6e19204fbf162e51922262cd5c2eaef564f0d8728192700\""
Apr 17 23:49:51.104489 containerd[1571]: time="2026-04-17T23:49:51.104433569Z" level=info msg="StartContainer for \"570271b24392bca0c6e19204fbf162e51922262cd5c2eaef564f0d8728192700\" returns successfully"
Apr 17 23:49:51.133317 containerd[1571]: time="2026-04-17T23:49:51.133159218Z" level=info msg="shim disconnected" id=570271b24392bca0c6e19204fbf162e51922262cd5c2eaef564f0d8728192700 namespace=k8s.io
Apr 17 23:49:51.133317 containerd[1571]: time="2026-04-17T23:49:51.133228701Z" level=warning msg="cleaning up after shim disconnected" id=570271b24392bca0c6e19204fbf162e51922262cd5c2eaef564f0d8728192700 namespace=k8s.io
Apr 17 23:49:51.133317 containerd[1571]: time="2026-04-17T23:49:51.133235940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:51.461113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142880079.mount: Deactivated successfully.
Apr 17 23:49:52.029390 kubelet[2679]: E0417 23:49:52.027264 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:52.034294 containerd[1571]: time="2026-04-17T23:49:52.034188263Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:49:52.055521 containerd[1571]: time="2026-04-17T23:49:52.055452271Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d\""
Apr 17 23:49:52.056412 containerd[1571]: time="2026-04-17T23:49:52.056342273Z" level=info msg="StartContainer for \"dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d\""
Apr 17 23:49:52.118173 containerd[1571]: time="2026-04-17T23:49:52.118116736Z" level=info msg="StartContainer for \"dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d\" returns successfully"
Apr 17 23:49:52.153613 containerd[1571]: time="2026-04-17T23:49:52.153514971Z" level=info msg="shim disconnected" id=dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d namespace=k8s.io
Apr 17 23:49:52.153613 containerd[1571]: time="2026-04-17T23:49:52.153605155Z" level=warning msg="cleaning up after shim disconnected" id=dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d namespace=k8s.io
Apr 17 23:49:52.153613 containerd[1571]: time="2026-04-17T23:49:52.153611836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:52.462733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc5f6c888d151a45ad2f9a9f60780495669778edafc15c8fcea8dc19d2ae335d-rootfs.mount: Deactivated successfully.
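The recurring "Nameserver limits exceeded" warning fires because the host's resolv.conf lists more nameservers than the kubelet will pass through (the per-platform cap is 3 on Linux); the kubelet keeps the first entries and reports them as the "applied nameserver line". A minimal sketch of that truncation rule; the fourth server (8.8.4.4) is an invented example, since the log does not show which entry was dropped:

```python
# Sketch of the kubelet nameserver truncation behind the repeated
# "Nameserver limits exceeded" warnings above. The cap of 3 is the
# kubelet's Linux limit; the fourth resolv.conf entry here (8.8.4.4)
# is hypothetical - the log only shows the three that were applied.

MAX_DNS_NAMESERVERS = 3

def apply_nameserver_limit(nameservers, limit=MAX_DNS_NAMESERVERS):
    """Keep the first `limit` nameservers, reporting the rest as omitted."""
    return nameservers[:limit], nameservers[limit:]

resolv_conf = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
applied, omitted = apply_nameserver_limit(resolv_conf)
print("applied:", " ".join(applied))  # applied: 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", " ".join(omitted))
```

The warning repeats on every pod sandbox DNS setup, which is why it appears so often in this log; trimming the host resolv.conf to three entries silences it.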
Apr 17 23:49:52.534932 kubelet[2679]: I0417 23:49:52.534736 2679 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:49:52Z","lastTransitionTime":"2026-04-17T23:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 17 23:49:53.032214 kubelet[2679]: E0417 23:49:53.032075 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:53.038410 containerd[1571]: time="2026-04-17T23:49:53.038338366Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:49:53.057023 containerd[1571]: time="2026-04-17T23:49:53.056936848Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae\""
Apr 17 23:49:53.057765 containerd[1571]: time="2026-04-17T23:49:53.057731862Z" level=info msg="StartContainer for \"ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae\""
Apr 17 23:49:53.124301 containerd[1571]: time="2026-04-17T23:49:53.124198944Z" level=info msg="StartContainer for \"ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae\" returns successfully"
Apr 17 23:49:53.149382 containerd[1571]: time="2026-04-17T23:49:53.149220017Z" level=info msg="shim disconnected" id=ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae namespace=k8s.io
Apr 17 23:49:53.149382 containerd[1571]: time="2026-04-17T23:49:53.149287082Z" level=warning msg="cleaning up after shim disconnected" id=ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae namespace=k8s.io
Apr 17 23:49:53.149382 containerd[1571]: time="2026-04-17T23:49:53.149294384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:49:53.461370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab61c0c23887e8ec0ffb822ee722337ba3c0d1f802c6f35fbe188976b4cbb0ae-rootfs.mount: Deactivated successfully.
Apr 17 23:49:53.664256 kubelet[2679]: E0417 23:49:53.664109 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:54.044642 kubelet[2679]: E0417 23:49:54.044500 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:54.051019 containerd[1571]: time="2026-04-17T23:49:54.050732422Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:49:54.072480 containerd[1571]: time="2026-04-17T23:49:54.072381008Z" level=info msg="CreateContainer within sandbox \"0eaa95401f5e49c2801ade7de198559648ea1c61b2e8833b921751d6a6c28e30\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bf314c12d6cf1d15353ac13cd49f85fd84d0d6a87f7a63ecacadc80f16d0e72\""
Apr 17 23:49:54.073321 containerd[1571]: time="2026-04-17T23:49:54.073260880Z" level=info msg="StartContainer for \"6bf314c12d6cf1d15353ac13cd49f85fd84d0d6a87f7a63ecacadc80f16d0e72\""
Apr 17 23:49:54.135256 containerd[1571]: time="2026-04-17T23:49:54.135191806Z" level=info msg="StartContainer for \"6bf314c12d6cf1d15353ac13cd49f85fd84d0d6a87f7a63ecacadc80f16d0e72\" returns successfully"
Apr 17 23:49:54.446748 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 17 23:49:55.051840 kubelet[2679]: E0417 23:49:55.051752 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:56.610738 kubelet[2679]: E0417 23:49:56.609598 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:57.633191 systemd-networkd[1258]: lxc_health: Link UP
Apr 17 23:49:57.649028 systemd-networkd[1258]: lxc_health: Gained carrier
Apr 17 23:49:58.613113 kubelet[2679]: E0417 23:49:58.613029 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:58.645737 kubelet[2679]: I0417 23:49:58.641848 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xgncv" podStartSLOduration=8.641828713 podStartE2EDuration="8.641828713s" podCreationTimestamp="2026-04-17 23:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:49:55.082070697 +0000 UTC m=+74.542880137" watchObservedRunningTime="2026-04-17 23:49:58.641828713 +0000 UTC m=+78.102638160"
Apr 17 23:49:59.060311 kubelet[2679]: E0417 23:49:59.060217 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:49:59.618961 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Apr 17 23:50:00.062594 kubelet[2679]: E0417 23:50:00.062509 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:50:03.250215 sshd[4496]: pam_unix(sshd:session): session closed for user core
Apr 17 23:50:03.253277 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:49964.service: Deactivated successfully.
Apr 17 23:50:03.254943 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:50:03.254947 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:50:03.256104 systemd-logind[1558]: Removed session 25.
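The sshd entries in this section show a rapid churn of short sessions (22 through 24) followed by one long-lived session 25 that spans the whole cilium rollout. Pairing the pam_unix "session opened"/"session closed" events per sshd PID gives each session's lifetime; the timestamps below are copied from this log, and the year 2026 is taken from the containerd timestamps since the journal prefix omits it:

```python
from datetime import datetime

# Pair pam_unix "session opened"/"session closed" events from the sshd
# entries above to compute session lifetimes. Timestamps are copied from
# this log; the year is supplied from the containerd timestamps because
# the journal prefix does not carry one.
events = [
    ("Apr 17 23:49:49.840544", "sshd[4473]", "opened"),
    ("Apr 17 23:49:50.252995", "sshd[4473]", "closed"),
    ("Apr 17 23:49:50.446920", "sshd[4496]", "opened"),
    ("Apr 17 23:50:03.250215", "sshd[4496]", "closed"),
]

open_at = {}
durations = {}
for stamp, proc, kind in events:
    t = datetime.strptime("2026 " + stamp, "%Y %b %d %H:%M:%S.%f")
    if kind == "opened":
        open_at[proc] = t
    else:
        durations[proc] = (t - open_at.pop(proc)).total_seconds()
        print(f"{proc}: {durations[proc]:.3f}s")
```

Session 23 (sshd[4473]) lasted under half a second, while session 25 (sshd[4496]) stayed open about 12.8 seconds, outliving the entire cilium-xgncv init sequence.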