Apr 28 02:02:01.530852 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 02:02:01.530871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:02:01.530880 kernel: BIOS-provided physical RAM map:
Apr 28 02:02:01.530886 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 02:02:01.530890 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 02:02:01.530962 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 02:02:01.530969 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 28 02:02:01.530974 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 28 02:02:01.530979 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 02:02:01.530986 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 02:02:01.530992 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 02:02:01.530997 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 02:02:01.531002 kernel: NX (Execute Disable) protection: active
Apr 28 02:02:01.531007 kernel: APIC: Static calls initialized
Apr 28 02:02:01.531013 kernel: SMBIOS 2.8 present.
Apr 28 02:02:01.531021 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 28 02:02:01.531026 kernel: Hypervisor detected: KVM
Apr 28 02:02:01.531032 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 02:02:01.531038 kernel: kvm-clock: using sched offset of 10522110789 cycles
Apr 28 02:02:01.531044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 02:02:01.531050 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 02:02:01.531056 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 02:02:01.531062 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 02:02:01.531067 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 28 02:02:01.531075 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 02:02:01.531081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 02:02:01.531086 kernel: Using GB pages for direct mapping
Apr 28 02:02:01.531092 kernel: ACPI: Early table checksum verification disabled
Apr 28 02:02:01.531097 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 28 02:02:01.531103 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531107 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531112 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531117 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 28 02:02:01.531123 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531127 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531132 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531137 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:02:01.531141 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 28 02:02:01.531146 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 28 02:02:01.531151 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 28 02:02:01.531159 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 28 02:02:01.531164 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 28 02:02:01.531169 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 28 02:02:01.531174 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 28 02:02:01.531179 kernel: No NUMA configuration found
Apr 28 02:02:01.531184 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 28 02:02:01.531189 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 28 02:02:01.531195 kernel: Zone ranges:
Apr 28 02:02:01.531200 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 02:02:01.531205 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 28 02:02:01.531210 kernel: Normal empty
Apr 28 02:02:01.531215 kernel: Movable zone start for each node
Apr 28 02:02:01.531220 kernel: Early memory node ranges
Apr 28 02:02:01.531225 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 02:02:01.531229 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 28 02:02:01.531234 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 28 02:02:01.531239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 02:02:01.531246 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 02:02:01.531251 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 28 02:02:01.531256 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 02:02:01.531261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 02:02:01.531266 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 02:02:01.531271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 02:02:01.531276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 02:02:01.531281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 02:02:01.531286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 02:02:01.531292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 02:02:01.531297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 02:02:01.531302 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 02:02:01.531307 kernel: TSC deadline timer available
Apr 28 02:02:01.531312 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 28 02:02:01.531317 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 02:02:01.531322 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 02:02:01.531327 kernel: kvm-guest: setup PV sched yield
Apr 28 02:02:01.531332 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 02:02:01.531338 kernel: Booting paravirtualized kernel on KVM
Apr 28 02:02:01.531343 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 02:02:01.531348 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 02:02:01.531353 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 28 02:02:01.531358 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 28 02:02:01.531363 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 02:02:01.531368 kernel: kvm-guest: PV spinlocks enabled
Apr 28 02:02:01.531373 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 02:02:01.531379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:02:01.531386 kernel: random: crng init done
Apr 28 02:02:01.531391 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 02:02:01.531396 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 02:02:01.531401 kernel: Fallback order for Node 0: 0
Apr 28 02:02:01.531406 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 28 02:02:01.531411 kernel: Policy zone: DMA32
Apr 28 02:02:01.531416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 02:02:01.531421 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137900K reserved, 0K cma-reserved)
Apr 28 02:02:01.531427 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 02:02:01.531432 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 02:02:01.531437 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 02:02:01.531442 kernel: Dynamic Preempt: voluntary
Apr 28 02:02:01.531447 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 02:02:01.531454 kernel: rcu: RCU event tracing is enabled.
Apr 28 02:02:01.531459 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 02:02:01.531465 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 02:02:01.531470 kernel: Rude variant of Tasks RCU enabled.
Apr 28 02:02:01.531476 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 02:02:01.531481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 02:02:01.531486 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 02:02:01.531491 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 02:02:01.531496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 02:02:01.531501 kernel: Console: colour VGA+ 80x25
Apr 28 02:02:01.531506 kernel: printk: console [ttyS0] enabled
Apr 28 02:02:01.531511 kernel: ACPI: Core revision 20230628
Apr 28 02:02:01.531516 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 02:02:01.531523 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 02:02:01.531638 kernel: x2apic enabled
Apr 28 02:02:01.531643 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 02:02:01.531648 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 02:02:01.531653 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 02:02:01.531658 kernel: kvm-guest: setup PV IPIs
Apr 28 02:02:01.531663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 02:02:01.531668 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:02:01.531680 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 02:02:01.531686 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 02:02:01.531691 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 02:02:01.531697 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 02:02:01.531703 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 02:02:01.531709 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 02:02:01.531714 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 02:02:01.531720 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 02:02:01.531727 kernel: RETBleed: Vulnerable
Apr 28 02:02:01.531732 kernel: Speculative Store Bypass: Vulnerable
Apr 28 02:02:01.531737 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 02:02:01.531743 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 02:02:01.531748 kernel: active return thunk: its_return_thunk
Apr 28 02:02:01.531754 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 02:02:01.531759 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 02:02:01.531765 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 02:02:01.531770 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 02:02:01.531778 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 02:02:01.531783 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 02:02:01.531789 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 02:02:01.531794 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 02:02:01.531799 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 02:02:01.531805 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 02:02:01.531810 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 02:02:01.531816 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 02:02:01.531821 kernel: Freeing SMP alternatives memory: 32K
Apr 28 02:02:01.531828 kernel: pid_max: default: 32768 minimum: 301
Apr 28 02:02:01.531833 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 02:02:01.531838 kernel: landlock: Up and running.
Apr 28 02:02:01.531844 kernel: SELinux: Initializing.
Apr 28 02:02:01.531849 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:02:01.531855 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:02:01.531860 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 02:02:01.531866 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:02:01.531871 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:02:01.531878 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:02:01.531883 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 02:02:01.531889 kernel: signal: max sigframe size: 3632
Apr 28 02:02:01.531952 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 02:02:01.531958 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 02:02:01.531964 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 02:02:01.531969 kernel: smp: Bringing up secondary CPUs ...
Apr 28 02:02:01.531975 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 02:02:01.531980 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 02:02:01.531987 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 02:02:01.531992 kernel: smpboot: Max logical packages: 1
Apr 28 02:02:01.531997 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 02:02:01.532003 kernel: devtmpfs: initialized
Apr 28 02:02:01.532008 kernel: x86/mm: Memory block size: 128MB
Apr 28 02:02:01.532013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 02:02:01.532019 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 02:02:01.532024 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 02:02:01.532030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 02:02:01.532036 kernel: audit: initializing netlink subsys (disabled)
Apr 28 02:02:01.532042 kernel: audit: type=2000 audit(1777341716.645:1): state=initialized audit_enabled=0 res=1
Apr 28 02:02:01.532047 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 02:02:01.532053 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 02:02:01.532058 kernel: cpuidle: using governor menu
Apr 28 02:02:01.532063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 02:02:01.532069 kernel: dca service started, version 1.12.1
Apr 28 02:02:01.532074 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 02:02:01.532080 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 02:02:01.532087 kernel: PCI: Using configuration type 1 for base access
Apr 28 02:02:01.532092 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 02:02:01.532098 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 02:02:01.532103 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 02:02:01.532108 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 02:02:01.532114 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 02:02:01.532119 kernel: ACPI: Added _OSI(Module Device)
Apr 28 02:02:01.532125 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 02:02:01.532130 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 02:02:01.532137 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 02:02:01.532143 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 02:02:01.532148 kernel: ACPI: Interpreter enabled
Apr 28 02:02:01.532153 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 02:02:01.532159 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 02:02:01.532164 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 02:02:01.532170 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 02:02:01.532175 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 02:02:01.532181 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 02:02:01.532286 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 02:02:01.532348 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 02:02:01.532402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 02:02:01.532409 kernel: PCI host bridge to bus 0000:00
Apr 28 02:02:01.532470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 02:02:01.532521 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 02:02:01.532701 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 02:02:01.532751 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 28 02:02:01.532800 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 02:02:01.532849 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 28 02:02:01.532963 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 02:02:01.533036 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 02:02:01.533099 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 28 02:02:01.533159 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 28 02:02:01.533215 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 28 02:02:01.533270 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 28 02:02:01.533325 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 02:02:01.533380 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 11718 usecs
Apr 28 02:02:01.533440 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 28 02:02:01.533497 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 02:02:01.533675 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 28 02:02:01.533732 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 28 02:02:01.533793 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 28 02:02:01.533849 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 02:02:01.533970 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 28 02:02:01.534028 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 28 02:02:01.534092 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 28 02:02:01.534148 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 28 02:02:01.534204 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 28 02:02:01.534259 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 28 02:02:01.534315 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 28 02:02:01.534374 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 02:02:01.534430 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 02:02:01.534491 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 10742 usecs
Apr 28 02:02:01.534672 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 02:02:01.534729 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 28 02:02:01.534785 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 28 02:02:01.534850 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 02:02:01.534971 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 02:02:01.534979 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 02:02:01.534988 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 02:02:01.534993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 02:02:01.534999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 02:02:01.535005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 02:02:01.535010 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 02:02:01.535016 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 02:02:01.535021 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 02:02:01.535027 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 02:02:01.535032 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 02:02:01.535039 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 02:02:01.535045 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 02:02:01.535050 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 02:02:01.535056 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 02:02:01.535061 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 02:02:01.535067 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 02:02:01.535072 kernel: iommu: Default domain type: Translated
Apr 28 02:02:01.535078 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 02:02:01.535083 kernel: PCI: Using ACPI for IRQ routing
Apr 28 02:02:01.535090 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 02:02:01.535096 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 02:02:01.535101 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 28 02:02:01.535157 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 02:02:01.535211 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 02:02:01.535266 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 02:02:01.535273 kernel: vgaarb: loaded
Apr 28 02:02:01.535279 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 02:02:01.535284 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 02:02:01.535292 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 02:02:01.535298 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 02:02:01.535303 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 02:02:01.535309 kernel: pnp: PnP ACPI init
Apr 28 02:02:01.535368 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 02:02:01.535376 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 02:02:01.535381 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 02:02:01.535387 kernel: NET: Registered PF_INET protocol family
Apr 28 02:02:01.535395 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 02:02:01.535400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 02:02:01.535406 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 02:02:01.535411 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 02:02:01.535417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 02:02:01.535423 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 02:02:01.535428 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:02:01.535434 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:02:01.535439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 02:02:01.535447 kernel: NET: Registered PF_XDP protocol family
Apr 28 02:02:01.535499 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 02:02:01.535668 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 02:02:01.535720 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 02:02:01.535768 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 28 02:02:01.535817 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 02:02:01.535866 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 28 02:02:01.535873 kernel: PCI: CLS 0 bytes, default 64
Apr 28 02:02:01.535881 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 02:02:01.535886 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:02:01.535954 kernel: Initialise system trusted keyrings
Apr 28 02:02:01.535961 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 02:02:01.535967 kernel: Key type asymmetric registered
Apr 28 02:02:01.535972 kernel: Asymmetric key parser 'x509' registered
Apr 28 02:02:01.535978 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 28 02:02:01.535983 kernel: io scheduler mq-deadline registered
Apr 28 02:02:01.535989 kernel: io scheduler kyber registered
Apr 28 02:02:01.535996 kernel: io scheduler bfq registered
Apr 28 02:02:01.536002 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 02:02:01.536008 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 02:02:01.536014 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 02:02:01.536019 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 02:02:01.536024 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 02:02:01.536030 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 02:02:01.536035 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 02:02:01.536041 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 02:02:01.536048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 02:02:01.536110 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 02:02:01.536118 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 02:02:01.536169 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 02:02:01.536221 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T02:02:00 UTC (1777341720)
Apr 28 02:02:01.536272 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 28 02:02:01.536279 kernel: intel_pstate: CPU model not supported
Apr 28 02:02:01.536284 kernel: NET: Registered PF_INET6 protocol family
Apr 28 02:02:01.536291 kernel: Segment Routing with IPv6
Apr 28 02:02:01.536297 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 02:02:01.536302 kernel: NET: Registered PF_PACKET protocol family
Apr 28 02:02:01.536308 kernel: Key type dns_resolver registered
Apr 28 02:02:01.536314 kernel: IPI shorthand broadcast: enabled
Apr 28 02:02:01.536319 kernel: sched_clock: Marking stable (3549114829, 883140570)->(4939487878, -507232479)
Apr 28 02:02:01.536325 kernel: registered taskstats version 1
Apr 28 02:02:01.536330 kernel: Loading compiled-in X.509 certificates
Apr 28 02:02:01.536336 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18'
Apr 28 02:02:01.536343 kernel: Key type .fscrypt registered
Apr 28 02:02:01.536348 kernel: Key type fscrypt-provisioning registered
Apr 28 02:02:01.536354 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 28 02:02:01.536359 kernel: ima: Allocated hash algorithm: sha1
Apr 28 02:02:01.536365 kernel: ima: No architecture policies found
Apr 28 02:02:01.536370 kernel: clk: Disabling unused clocks
Apr 28 02:02:01.536376 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 02:02:01.536381 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 02:02:01.536387 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 02:02:01.536394 kernel: Run /init as init process
Apr 28 02:02:01.536399 kernel: with arguments:
Apr 28 02:02:01.536405 kernel: /init
Apr 28 02:02:01.536411 kernel: with environment:
Apr 28 02:02:01.536416 kernel: HOME=/
Apr 28 02:02:01.536421 kernel: TERM=linux
Apr 28 02:02:01.536428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:02:01.536436 systemd[1]: Detected virtualization kvm.
Apr 28 02:02:01.536444 systemd[1]: Detected architecture x86-64.
Apr 28 02:02:01.536450 systemd[1]: Running in initrd.
Apr 28 02:02:01.536456 systemd[1]: No hostname configured, using default hostname.
Apr 28 02:02:01.536461 systemd[1]: Hostname set to .
Apr 28 02:02:01.536467 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:02:01.536473 systemd[1]: Queued start job for default target initrd.target.
Apr 28 02:02:01.536479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:02:01.536485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:02:01.536492 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 02:02:01.536499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:02:01.536514 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 02:02:01.536522 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 02:02:01.536645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 02:02:01.536653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 02:02:01.536659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:02:01.536665 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:02:01.536671 systemd[1]: Reached target paths.target - Path Units.
Apr 28 02:02:01.536677 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:02:01.536683 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:02:01.536689 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 02:02:01.536695 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:02:01.536701 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:02:01.536709 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 02:02:01.536715 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 02:02:01.536722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:02:01.536728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:02:01.536734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:02:01.536740 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 02:02:01.536746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 02:02:01.536752 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 02:02:01.536759 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 02:02:01.536765 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 02:02:01.536771 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 02:02:01.536777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 02:02:01.536796 systemd-journald[195]: Collecting audit messages is disabled.
Apr 28 02:02:01.536813 systemd-journald[195]: Journal started
Apr 28 02:02:01.536830 systemd-journald[195]: Runtime Journal (/run/log/journal/e7ac927d4b48464bbf679b0c6ffe449d) is 6.0M, max 48.4M, 42.3M free.
Apr 28 02:02:01.557498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 02:02:01.566823 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 02:02:01.577183 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 02:02:01.587425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:02:01.598169 systemd-modules-load[196]: Inserted module 'overlay'
Apr 28 02:02:01.599075 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 02:02:01.641384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 02:02:02.279176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 02:02:02.279205 kernel: Bridge firewalling registered
Apr 28 02:02:01.683783 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 28 02:02:02.288062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 02:02:02.299470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:02:02.333479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:02:02.334330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 02:02:02.372419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:02:02.401215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 02:02:02.402876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:02:02.428250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 02:02:02.453237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:02:02.464782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 02:02:02.480890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:02:02.502894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 02:02:02.505820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:02:02.544120 dracut-cmdline[229]: dracut-dracut-053
Apr 28 02:02:02.544120 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:02:02.619043 systemd-resolved[232]: Positive Trust Anchors:
Apr 28 02:02:02.619109 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 02:02:02.619133 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 02:02:02.621966 systemd-resolved[232]: Defaulting to hostname 'linux'.
Apr 28 02:02:02.623513 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 02:02:02.625493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:02:02.816131 kernel: SCSI subsystem initialized
Apr 28 02:02:02.832085 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 02:02:02.856039 kernel: iscsi: registered transport (tcp)
Apr 28 02:02:02.960791 kernel: iscsi: registered transport (qla4xxx)
Apr 28 02:02:02.961284 kernel: QLogic iSCSI HBA Driver
Apr 28 02:02:03.034421 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 02:02:03.061147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 02:02:03.117056 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 28 02:02:03.117193 kernel: device-mapper: uevent: version 1.0.3
Apr 28 02:02:03.130411 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 28 02:02:03.209247 kernel: raid6: avx512x4 gen() 27541 MB/s
Apr 28 02:02:03.231181 kernel: raid6: avx512x2 gen() 27414 MB/s
Apr 28 02:02:03.253213 kernel: raid6: avx512x1 gen() 25466 MB/s
Apr 28 02:02:03.274164 kernel: raid6: avx2x4 gen() 22986 MB/s
Apr 28 02:02:03.295243 kernel: raid6: avx2x2 gen() 25407 MB/s
Apr 28 02:02:03.321066 kernel: raid6: avx2x1 gen() 22286 MB/s
Apr 28 02:02:03.321207 kernel: raid6: using algorithm avx512x4 gen() 27541 MB/s
Apr 28 02:02:03.346811 kernel: raid6: .... xor() 7702 MB/s, rmw enabled
Apr 28 02:02:03.347018 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 02:02:03.384398 kernel: xor: automatically using best checksumming function   avx
Apr 28 02:02:03.707130 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 02:02:03.725311 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 02:02:03.762468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:02:03.780348 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 28 02:02:03.794356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:02:03.823141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 02:02:03.863443 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Apr 28 02:02:03.924146 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 02:02:03.951223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 02:02:04.010906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:02:04.042112 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 02:02:04.064279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 02:02:04.092427 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 02:02:04.106821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:02:04.145384 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 02:02:04.187066 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 02:02:04.187249 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 02:02:04.196126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 28 02:02:04.245852 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 28 02:02:04.246151 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 02:02:04.246162 kernel: GPT:9289727 != 19775487
Apr 28 02:02:04.246169 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 02:02:04.246176 kernel: GPT:9289727 != 19775487
Apr 28 02:02:04.246183 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 02:02:04.246190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 02:02:04.222027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 02:02:04.222078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:02:04.272016 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 02:02:04.281851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 02:02:04.281993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:02:04.292078 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 02:02:04.305500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 02:02:04.329463 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 02:02:04.410727 kernel: libata version 3.00 loaded.
Apr 28 02:02:04.426856 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 28 02:02:04.436165 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 02:02:04.436392 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 02:02:04.436402 kernel: AES CTR mode by8 optimization enabled
Apr 28 02:02:04.447188 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 28 02:02:04.447424 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 02:02:04.468879 kernel: scsi host0: ahci
Apr 28 02:02:04.478036 kernel: scsi host1: ahci
Apr 28 02:02:04.478258 kernel: scsi host2: ahci
Apr 28 02:02:04.474090 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 02:02:05.230223 kernel: scsi host3: ahci
Apr 28 02:02:05.230462 kernel: scsi host4: ahci
Apr 28 02:02:05.241073 kernel: scsi host5: ahci
Apr 28 02:02:05.241204 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Apr 28 02:02:05.241225 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Apr 28 02:02:05.241251 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Apr 28 02:02:05.241263 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Apr 28 02:02:05.241275 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Apr 28 02:02:05.241287 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Apr 28 02:02:05.241299 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Apr 28 02:02:05.241310 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474)
Apr 28 02:02:05.241321 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 02:02:05.241331 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 02:02:05.241341 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 02:02:05.241349 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 02:02:05.241356 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 02:02:05.241363 kernel: ata3.00: applying bridge limits
Apr 28 02:02:05.241370 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 02:02:05.241380 kernel: ata3.00: configured for UDMA/100
Apr 28 02:02:05.241391 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 02:02:05.241402 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 28 02:02:05.241516 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 02:02:05.242093 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 02:02:05.242103 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 02:02:05.230453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:02:05.266766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 02:02:05.283165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 02:02:05.302694 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 02:02:05.314309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 28 02:02:05.366509 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 02:02:05.387744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 02:02:05.418747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 02:02:05.418772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 02:02:05.418781 disk-uuid[568]: Primary Header is updated.
Apr 28 02:02:05.418781 disk-uuid[568]: Secondary Entries is updated.
Apr 28 02:02:05.418781 disk-uuid[568]: Secondary Header is updated.
Apr 28 02:02:05.452778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 02:02:05.476785 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:02:06.453839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 02:02:06.456232 disk-uuid[569]: The operation has completed successfully.
Apr 28 02:02:06.501507 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 02:02:06.502176 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 02:02:06.543126 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 28 02:02:06.561508 sh[592]: Success
Apr 28 02:02:06.593839 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 28 02:02:06.659389 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 02:02:06.685853 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 28 02:02:06.692521 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 28 02:02:06.756209 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93
Apr 28 02:02:06.756335 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:02:06.756344 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 28 02:02:06.771296 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 28 02:02:06.771399 kernel: BTRFS info (device dm-0): using free space tree
Apr 28 02:02:06.797417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 28 02:02:06.811708 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 02:02:06.837320 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 28 02:02:06.858431 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 02:02:06.966891 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:02:06.966917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:02:06.966924 kernel: BTRFS info (device vda6): using free space tree
Apr 28 02:02:06.967010 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 02:02:06.989917 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 28 02:02:07.005415 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:02:07.013822 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 28 02:02:07.045197 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 28 02:02:07.168882 ignition[662]: Ignition 2.19.0
Apr 28 02:02:07.169046 ignition[662]: Stage: fetch-offline
Apr 28 02:02:07.169090 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:07.169100 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:07.169284 ignition[662]: parsed url from cmdline: ""
Apr 28 02:02:07.169288 ignition[662]: no config URL provided
Apr 28 02:02:07.169296 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 02:02:07.169305 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Apr 28 02:02:07.169334 ignition[662]: op(1): [started]  loading QEMU firmware config module
Apr 28 02:02:07.169340 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 28 02:02:07.202188 ignition[662]: op(1): [finished] loading QEMU firmware config module
Apr 28 02:02:07.294710 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 02:02:07.320861 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 02:02:07.352853 systemd-networkd[780]: lo: Link UP
Apr 28 02:02:07.352996 systemd-networkd[780]: lo: Gained carrier
Apr 28 02:02:07.354263 systemd-networkd[780]: Enumeration completed
Apr 28 02:02:07.354408 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 02:02:07.360186 systemd[1]: Reached target network.target - Network.
Apr 28 02:02:07.360690 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:02:07.360693 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 02:02:07.375712 systemd-networkd[780]: eth0: Link UP
Apr 28 02:02:07.375714 systemd-networkd[780]: eth0: Gained carrier
Apr 28 02:02:07.375722 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:02:07.447824 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 02:02:08.100068 ignition[662]: parsing config with SHA512: dc40bf806e58a5446950267fc59fb279b2ab4d053508450e97a5feb50cc2bd90ef076d2b33af11e5b944dc2a36b11a45a62a6a627ffdbc08c1540791e0053b40
Apr 28 02:02:08.123483 unknown[662]: fetched base config from "system"
Apr 28 02:02:08.124043 unknown[662]: fetched user config from "qemu"
Apr 28 02:02:08.124516 ignition[662]: fetch-offline: fetch-offline passed
Apr 28 02:02:08.124700 ignition[662]: Ignition finished successfully
Apr 28 02:02:08.151303 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 02:02:08.164023 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 28 02:02:08.173430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 02:02:08.240889 ignition[785]: Ignition 2.19.0
Apr 28 02:02:08.241017 ignition[785]: Stage: kargs
Apr 28 02:02:08.241160 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:08.241167 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:08.242045 ignition[785]: kargs: kargs passed
Apr 28 02:02:08.242081 ignition[785]: Ignition finished successfully
Apr 28 02:02:08.280464 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 02:02:08.306814 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 02:02:08.358342 ignition[793]: Ignition 2.19.0
Apr 28 02:02:08.358408 ignition[793]: Stage: disks
Apr 28 02:02:08.361864 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 02:02:08.358731 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:08.358739 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:08.359793 ignition[793]: disks: disks passed
Apr 28 02:02:08.359826 ignition[793]: Ignition finished successfully
Apr 28 02:02:08.386166 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 28 02:02:08.412242 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 02:02:08.421153 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 02:02:08.430787 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 02:02:08.438902 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 02:02:08.447169 systemd[1]: Reached target basic.target - Basic System.
Apr 28 02:02:08.498160 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 02:02:08.531860 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 28 02:02:08.542218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 02:02:08.576258 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 02:02:08.784858 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 02:02:08.786320 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 02:02:08.787242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 02:02:08.830225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 02:02:08.836713 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 02:02:08.857206 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 28 02:02:08.872124 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:02:08.872187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:02:08.872200 kernel: BTRFS info (device vda6): using free space tree
Apr 28 02:02:08.884094 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 02:02:08.884148 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 02:02:08.884177 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 02:02:08.955044 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 02:02:08.896393 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 02:02:08.934148 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 02:02:08.956723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 02:02:09.042209 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 02:02:09.054080 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 28 02:02:09.064240 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 02:02:09.073052 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 02:02:09.297360 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 02:02:09.317305 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 02:02:09.331201 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 02:02:09.364821 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:02:09.345363 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 02:02:09.416910 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 02:02:09.445769 ignition[923]: INFO : Ignition 2.19.0
Apr 28 02:02:09.445769 ignition[923]: INFO : Stage: mount
Apr 28 02:02:09.445769 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:09.445769 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:09.475243 ignition[923]: INFO : mount: mount passed
Apr 28 02:02:09.475243 ignition[923]: INFO : Ignition finished successfully
Apr 28 02:02:09.489833 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 02:02:09.519084 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 02:02:09.800175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 02:02:09.845483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Apr 28 02:02:09.845506 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:02:09.845514 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:02:09.845522 kernel: BTRFS info (device vda6): using free space tree
Apr 28 02:02:09.866082 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 02:02:09.868924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 02:02:09.986192 ignition[954]: INFO : Ignition 2.19.0
Apr 28 02:02:09.986192 ignition[954]: INFO : Stage: files
Apr 28 02:02:09.986192 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:09.986192 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:10.023160 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 02:02:10.034776 ignition[954]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 28 02:02:10.034776 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 02:02:10.064869 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 02:02:10.077870 ignition[954]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 28 02:02:10.089478 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 02:02:10.080869 unknown[954]: wrote ssh authorized keys file for user: core
Apr 28 02:02:10.110723 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 02:02:10.110723 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 02:02:10.262674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 02:02:10.561431 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 02:02:10.561431 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 02:02:10.561431 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 28 02:02:10.980820 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 28 02:02:12.118078 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 02:02:12.118078 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 02:02:12.157313 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 28 02:02:12.749936 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 28 02:02:17.458196 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 02:02:17.458196 ignition[954]: INFO : files: op(c): [started]  processing unit "prepare-helm.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(e): [started]  processing unit "coreos-metadata.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 28 02:02:17.488444 ignition[954]: INFO : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 02:02:17.600112 ignition[954]: INFO : files: files passed
Apr 28 02:02:17.600112 ignition[954]: INFO : Ignition finished successfully
Apr 28 02:02:17.555334 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 02:02:17.664455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 02:02:17.680176 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 02:02:17.696910 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 02:02:17.792926 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 02:02:17.697103 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 02:02:17.816506 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:02:17.816506 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:02:17.744166 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:02:17.861344 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:02:17.764355 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 02:02:17.849174 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 02:02:17.946896 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 02:02:17.947223 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 02:02:17.966782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 02:02:17.991373 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 02:02:17.991938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 02:02:18.021090 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 02:02:18.061090 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:02:18.091320 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 02:02:18.126360 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:02:18.130882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:02:18.152254 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 02:02:18.166170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 02:02:18.166276 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:02:18.186267 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 02:02:18.199175 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 02:02:18.216812 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 02:02:18.232465 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 02:02:18.248268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 02:02:18.265827 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 02:02:18.282215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 02:02:18.282412 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 02:02:18.318136 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 02:02:18.335372 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 02:02:18.343831 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 02:02:18.343935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 02:02:18.375922 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:02:18.385885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:02:18.392989 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 02:02:18.394088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:02:18.410408 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 02:02:18.410509 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 02:02:18.445433 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 02:02:18.445861 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 02:02:18.453957 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 02:02:18.474255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 02:02:18.478745 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:02:18.488963 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 02:02:18.502854 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 02:02:18.516315 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 02:02:18.516399 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:02:18.531327 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 02:02:18.531407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:02:18.548287 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 02:02:18.548417 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:02:18.564723 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 02:02:18.564813 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 02:02:18.635390 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 02:02:18.693154 ignition[1009]: INFO : Ignition 2.19.0
Apr 28 02:02:18.693154 ignition[1009]: INFO : Stage: umount
Apr 28 02:02:18.693154 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:02:18.693154 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:02:18.693154 ignition[1009]: INFO : umount: umount passed
Apr 28 02:02:18.693154 ignition[1009]: INFO : Ignition finished successfully
Apr 28 02:02:18.639787 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 02:02:18.640081 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:02:18.721305 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 02:02:18.728841 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 02:02:18.729116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:02:18.742447 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 02:02:18.742763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 02:02:18.783253 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 02:02:18.783415 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 02:02:18.797251 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 02:02:18.799822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 02:02:18.799967 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 02:02:18.815382 systemd[1]: Stopped target network.target - Network.
Apr 28 02:02:18.827786 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 02:02:18.827839 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 02:02:18.843413 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 02:02:18.843451 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 02:02:18.851081 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 02:02:18.851123 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 02:02:18.874360 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 02:02:18.874419 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 02:02:18.881666 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 02:02:18.904944 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 02:02:18.949837 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 02:02:18.950339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 02:02:18.950724 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 28 02:02:18.957516 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 02:02:18.957731 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:02:18.971343 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 02:02:18.971759 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 02:02:18.987335 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 02:02:18.987488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 02:02:19.010342 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 02:02:19.010389 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:02:19.021397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 02:02:19.021451 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 02:02:19.059261 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 02:02:19.064418 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 02:02:19.064478 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 02:02:19.088117 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 02:02:19.088155 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:02:19.195325 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 02:02:19.195416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:02:19.206274 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:02:19.277805 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 02:02:19.278141 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:02:19.286988 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 02:02:19.287254 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 02:02:19.297098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 02:02:19.297140 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:02:19.448183 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 28 02:02:19.305091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 02:02:19.305118 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:02:19.313385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 02:02:19.313427 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 02:02:19.324154 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 02:02:19.325344 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 02:02:19.326385 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 02:02:19.326424 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:02:19.339833 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 02:02:19.340406 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 02:02:19.340692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:02:19.349762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 02:02:19.349808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:02:19.363674 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 02:02:19.363808 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 02:02:19.364226 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 02:02:19.372971 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 02:02:19.397453 systemd[1]: Switching root.
Apr 28 02:02:19.609105 systemd-journald[195]: Journal stopped
Apr 28 02:02:21.600351 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 02:02:21.600402 kernel: SELinux: policy capability open_perms=1
Apr 28 02:02:21.600411 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 02:02:21.600419 kernel: SELinux: policy capability always_check_network=0
Apr 28 02:02:21.600426 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 02:02:21.600433 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 02:02:21.600444 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 02:02:21.600455 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 02:02:21.600463 kernel: audit: type=1403 audit(1777341739.726:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 02:02:21.600479 systemd[1]: Successfully loaded SELinux policy in 76.703ms.
Apr 28 02:02:21.600494 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.783ms.
Apr 28 02:02:21.600503 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:02:21.600511 systemd[1]: Detected virtualization kvm.
Apr 28 02:02:21.600520 systemd[1]: Detected architecture x86-64.
Apr 28 02:02:21.600651 systemd[1]: Detected first boot.
Apr 28 02:02:21.600664 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:02:21.600673 kernel: hrtimer: interrupt took 5980761 ns
Apr 28 02:02:21.600681 zram_generator::config[1052]: No configuration found.
Apr 28 02:02:21.600690 systemd[1]: Populated /etc with preset unit settings.
Apr 28 02:02:21.600699 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 02:02:21.600707 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 02:02:21.600716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 02:02:21.600725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 02:02:21.600733 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 02:02:21.600741 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 02:02:21.600749 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 02:02:21.600757 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 02:02:21.600765 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 02:02:21.600772 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 02:02:21.600780 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 02:02:21.600789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:02:21.600797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:02:21.600805 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 02:02:21.600813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 02:02:21.600821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 02:02:21.600829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:02:21.600836 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 02:02:21.600844 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:02:21.600852 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 02:02:21.600862 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 02:02:21.600870 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 02:02:21.600878 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 02:02:21.600886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:02:21.600894 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 02:02:21.600902 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:02:21.600910 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:02:21.600917 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 02:02:21.600926 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 02:02:21.600934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:02:21.600942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:02:21.600950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:02:21.600957 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 02:02:21.600966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 02:02:21.600973 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 02:02:21.600981 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 02:02:21.600989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:02:21.600999 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 02:02:21.601007 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 02:02:21.601079 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 02:02:21.601092 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 02:02:21.601100 systemd[1]: Reached target machines.target - Containers.
Apr 28 02:02:21.601108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 02:02:21.601116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:02:21.601124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 02:02:21.601133 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 02:02:21.601141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:02:21.601149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 02:02:21.601156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:02:21.601166 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 02:02:21.601173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:02:21.601181 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 02:02:21.601189 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 02:02:21.601198 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 02:02:21.601205 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 02:02:21.601213 kernel: ACPI: bus type drm_connector registered
Apr 28 02:02:21.601220 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 02:02:21.601227 kernel: loop: module loaded
Apr 28 02:02:21.601234 kernel: fuse: init (API version 7.39)
Apr 28 02:02:21.601241 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 02:02:21.601250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 02:02:21.601258 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 02:02:21.601280 systemd-journald[1137]: Collecting audit messages is disabled.
Apr 28 02:02:21.601298 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 02:02:21.601307 systemd-journald[1137]: Journal started
Apr 28 02:02:21.601323 systemd-journald[1137]: Runtime Journal (/run/log/journal/e7ac927d4b48464bbf679b0c6ffe449d) is 6.0M, max 48.4M, 42.3M free.
Apr 28 02:02:20.508343 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 02:02:20.542154 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 02:02:20.543131 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 02:02:20.543440 systemd[1]: systemd-journald.service: Consumed 2.144s CPU time.
Apr 28 02:02:21.635360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 02:02:21.635410 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 28 02:02:21.641791 systemd[1]: Stopped verity-setup.service.
Apr 28 02:02:21.667898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:02:21.676843 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 02:02:21.686145 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 02:02:21.694331 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 02:02:21.702807 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 02:02:21.710275 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 02:02:21.718760 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 02:02:21.727140 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 02:02:21.734454 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 02:02:21.743739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:02:21.753720 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 02:02:21.753948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 02:02:21.763510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:02:21.764014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:02:21.773235 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 02:02:21.773799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 02:02:21.782336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:02:21.782972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:02:21.792350 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 02:02:21.792752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 02:02:21.801769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:02:21.801991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:02:21.811237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:02:21.820181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 02:02:21.831172 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 02:02:21.841285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:02:21.866738 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 02:02:21.887948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 02:02:21.898226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 02:02:21.906647 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 02:02:21.906726 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 02:02:21.915752 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 02:02:21.926764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 02:02:21.936828 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 02:02:21.944460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:02:21.947702 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 02:02:21.958299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 02:02:21.968757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 02:02:21.970205 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 02:02:21.978686 systemd-journald[1137]: Time spent on flushing to /var/log/journal/e7ac927d4b48464bbf679b0c6ffe449d is 56.548ms for 956 entries.
Apr 28 02:02:21.978686 systemd-journald[1137]: System Journal (/var/log/journal/e7ac927d4b48464bbf679b0c6ffe449d) is 8.0M, max 195.6M, 187.6M free.
Apr 28 02:02:22.407407 systemd-journald[1137]: Received client request to flush runtime journal.
Apr 28 02:02:21.987283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 02:02:21.992972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:02:22.006143 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 02:02:22.021506 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 02:02:22.039458 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 02:02:22.052419 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 02:02:22.063770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 02:02:22.074786 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 02:02:22.089008 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 02:02:22.100959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 02:02:22.141427 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 02:02:22.460817 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 02:02:22.480839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:02:22.532726 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 28 02:02:22.544259 kernel: loop0: detected capacity change from 0 to 142488
Apr 28 02:02:22.661233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 02:02:22.668495 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 02:02:22.789181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 02:02:22.885319 kernel: loop1: detected capacity change from 0 to 140768
Apr 28 02:02:22.902779 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 02:02:22.933441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 02:02:22.986956 kernel: loop2: detected capacity change from 0 to 219192
Apr 28 02:02:23.115408 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Apr 28 02:02:23.115421 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Apr 28 02:02:23.152191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:02:23.191732 kernel: loop3: detected capacity change from 0 to 142488
Apr 28 02:02:23.268162 kernel: loop4: detected capacity change from 0 to 140768
Apr 28 02:02:23.329691 kernel: loop5: detected capacity change from 0 to 219192
Apr 28 02:02:23.356911 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 02:02:23.357987 (sd-merge)[1191]: Merged extensions into '/usr'.
Apr 28 02:02:23.370483 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 02:02:23.370836 systemd[1]: Reloading...
Apr 28 02:02:24.010729 zram_generator::config[1217]: No configuration found.
Apr 28 02:02:24.214293 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 02:02:24.249938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:02:24.293783 systemd[1]: Reloading finished in 922 ms.
Apr 28 02:02:24.350995 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 02:02:24.361824 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 02:02:24.397277 systemd[1]: Starting ensure-sysext.service...
Apr 28 02:02:24.405866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 02:02:24.431784 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Apr 28 02:02:24.432184 systemd[1]: Reloading...
Apr 28 02:02:24.918860 zram_generator::config[1289]: No configuration found.
Apr 28 02:02:24.984393 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 02:02:24.984865 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 02:02:24.985864 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 02:02:24.986181 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 28 02:02:24.986217 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 28 02:02:24.988461 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:02:24.988699 systemd-tmpfiles[1258]: Skipping /boot
Apr 28 02:02:24.998357 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:02:24.998448 systemd-tmpfiles[1258]: Skipping /boot
Apr 28 02:02:25.097233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:02:25.138343 systemd[1]: Reloading finished in 705 ms.
Apr 28 02:02:25.158525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:02:25.211250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 02:02:25.225169 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 02:02:25.247497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 02:02:25.263860 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 02:02:25.278874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 02:02:25.290508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:02:25.290746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:02:25.308126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:02:25.317684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:02:25.329410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:02:25.337207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:02:25.340289 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 02:02:25.347998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:02:25.349416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 02:02:25.360293 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 02:02:25.363939 augenrules[1346]: No rules Apr 28 02:02:25.370319 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 02:02:25.401867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 02:02:25.401978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:02:25.412795 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:02:25.413225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 02:02:25.427230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:02:25.427451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 28 02:02:25.442858 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 02:02:25.443004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 02:02:25.443143 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 02:02:25.444334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 02:02:25.461408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:02:25.461848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:02:25.471214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:02:26.357254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:02:26.370201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:02:26.382165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:02:26.382815 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 02:02:26.382999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:02:26.387342 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 28 02:02:26.407019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 02:02:26.432444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:02:26.434441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:02:26.444237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 02:02:26.444489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:02:26.457474 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:02:26.457811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 02:02:26.476850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:02:26.477175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:02:26.477375 systemd-resolved[1332]: Positive Trust Anchors: Apr 28 02:02:26.477459 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 02:02:26.477484 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 02:02:26.485782 systemd-resolved[1332]: Defaulting to hostname 'linux'. Apr 28 02:02:26.486400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:02:26.497300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 28 02:02:26.507776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:02:26.519794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:02:26.528432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:02:26.530802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 02:02:26.548222 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 02:02:26.557187 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 02:02:26.557365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:02:26.558293 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 02:02:26.568292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:02:26.568499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:02:26.578486 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 02:02:26.578763 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 02:02:26.589770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 02:02:26.590177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:02:26.601006 systemd-udevd[1374]: Using default interface naming scheme 'v255'. Apr 28 02:02:26.603834 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:02:26.604241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 28 02:02:26.614465 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 02:02:26.627799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:02:26.639274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 02:02:26.639385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 02:02:26.640733 systemd[1]: Finished ensure-sysext.service. Apr 28 02:02:26.648708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:02:26.685409 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 02:02:26.707229 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 02:02:26.789019 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 28 02:02:26.893838 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 02:02:26.903522 systemd-networkd[1396]: lo: Link UP Apr 28 02:02:26.903716 systemd-networkd[1396]: lo: Gained carrier Apr 28 02:02:26.904490 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 02:02:26.905148 systemd-networkd[1396]: Enumeration completed Apr 28 02:02:26.909443 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:02:26.909449 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 02:02:26.913257 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 28 02:02:26.913978 systemd-networkd[1396]: eth0: Link UP Apr 28 02:02:26.914019 systemd-networkd[1396]: eth0: Gained carrier Apr 28 02:02:26.914178 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:02:26.929161 systemd[1]: Reached target network.target - Network. Apr 28 02:02:26.944762 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 02:02:26.947496 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Apr 28 02:02:26.949195 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 02:02:26.950708 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:02:27.778356 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 28 02:02:27.778630 systemd-resolved[1332]: Clock change detected. Flushing caches. Apr 28 02:02:27.780747 systemd-timesyncd[1397]: Initial clock synchronization to Tue 2026-04-28 02:02:27.778179 UTC. Apr 28 02:02:27.840072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1388) Apr 28 02:02:28.433783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 28 02:02:28.447523 kernel: ACPI: button: Power Button [PWRF] Apr 28 02:02:28.494104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 02:02:28.519133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 28 02:02:28.549958 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 02:02:28.550594 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 28 02:02:28.550713 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 02:02:28.566313 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 28 02:02:28.576047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:02:28.594213 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 02:02:28.674665 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 02:02:29.107969 systemd-networkd[1396]: eth0: Gained IPv6LL Apr 28 02:02:29.200224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 02:02:30.210325 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 02:02:30.227171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:02:30.255252 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 02:02:30.280357 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 02:02:30.421186 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 02:02:30.506268 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 02:02:30.519183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 02:02:30.529268 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 02:02:30.539042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 02:02:30.550130 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Apr 28 02:02:30.561777 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 02:02:30.572040 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 02:02:30.585121 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 02:02:30.596782 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 02:02:30.597353 systemd[1]: Reached target paths.target - Path Units. Apr 28 02:02:30.605246 systemd[1]: Reached target timers.target - Timer Units. Apr 28 02:02:30.616624 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 02:02:30.640106 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 28 02:02:30.667662 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 02:02:30.688723 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 02:02:30.708876 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 02:02:30.759653 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 02:02:30.726353 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 02:02:30.742503 systemd[1]: Reached target basic.target - Basic System. Apr 28 02:02:30.757805 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 02:02:30.757845 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 02:02:30.760182 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 02:02:30.782593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 02:02:30.800287 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 28 02:02:30.817860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 02:02:30.837487 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 02:02:30.853703 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 02:02:30.868650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:02:30.876245 jq[1436]: false Apr 28 02:02:30.893669 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 02:02:30.910525 extend-filesystems[1437]: Found loop3 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found loop4 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found loop5 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found sr0 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda1 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda2 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda3 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found usr Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda4 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda6 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda7 Apr 28 02:02:30.910525 extend-filesystems[1437]: Found vda9 Apr 28 02:02:30.910525 extend-filesystems[1437]: Checking size of /dev/vda9 Apr 28 02:02:31.141056 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 28 02:02:31.141086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1383) Apr 28 02:02:31.141097 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 28 02:02:31.141572 extend-filesystems[1437]: Resized partition /dev/vda9 Apr 28 02:02:30.925330 dbus-daemon[1435]: [system] SELinux support is enabled Apr 28 02:02:30.934204 systemd[1]: Starting 
nvidia.service - NVIDIA Configure Service... Apr 28 02:02:31.171619 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Apr 28 02:02:31.060036 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 02:02:31.092283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 02:02:31.131061 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 02:02:31.164063 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 28 02:02:31.176987 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 02:02:31.176987 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 02:02:31.176987 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 28 02:02:31.194672 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Apr 28 02:02:31.222555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 02:02:31.223294 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 28 02:02:31.225260 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 02:02:31.234821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 02:02:31.243660 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 02:02:31.255028 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 02:02:31.269007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 28 02:02:31.269817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 02:02:31.270146 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 28 02:02:31.270617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 02:02:31.273860 jq[1467]: true Apr 28 02:02:31.282016 systemd[1]: motdgen.service: Deactivated successfully. Apr 28 02:02:31.282248 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 02:02:31.290212 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 02:02:31.299306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 02:02:31.299592 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 28 02:02:31.317902 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button) Apr 28 02:02:31.317994 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 28 02:02:31.320861 systemd-logind[1463]: New seat seat0. Apr 28 02:02:31.324219 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 02:02:31.345491 jq[1472]: true Apr 28 02:02:31.351804 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 28 02:02:31.364513 dbus-daemon[1435]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 28 02:02:31.367308 tar[1471]: linux-amd64/LICENSE Apr 28 02:02:31.376900 tar[1471]: linux-amd64/helm Apr 28 02:02:31.371257 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 02:02:31.371868 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 28 02:02:31.381716 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 28 02:02:31.381887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 02:02:31.399025 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 02:02:31.399233 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 02:02:31.408712 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 02:02:31.410725 update_engine[1466]: I20260428 02:02:31.409307 1466 main.cc:92] Flatcar Update Engine starting Apr 28 02:02:31.433546 systemd[1]: Started update-engine.service - Update Engine. Apr 28 02:02:31.444580 update_engine[1466]: I20260428 02:02:31.444201 1466 update_check_scheduler.cc:74] Next update check in 11m39s Apr 28 02:02:31.451850 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 02:02:31.470713 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Apr 28 02:02:31.481872 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 02:02:31.492236 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 28 02:02:31.541546 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 02:02:31.676190 containerd[1473]: time="2026-04-28T02:02:31.676049492Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 02:02:31.714566 containerd[1473]: time="2026-04-28T02:02:31.714179065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.719670 containerd[1473]: time="2026-04-28T02:02:31.719355507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:02:31.719670 containerd[1473]: time="2026-04-28T02:02:31.719604809Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 02:02:31.719670 containerd[1473]: time="2026-04-28T02:02:31.719629481Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 02:02:31.720019 containerd[1473]: time="2026-04-28T02:02:31.719767551Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 02:02:31.720019 containerd[1473]: time="2026-04-28T02:02:31.719780321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.720019 containerd[1473]: time="2026-04-28T02:02:31.719827238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:02:31.720019 containerd[1473]: time="2026-04-28T02:02:31.719837849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720075958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720091578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720104984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720111934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720171076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.720311667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.721108303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.721133465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.721213559Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 02:02:31.722066 containerd[1473]: time="2026-04-28T02:02:31.721252817Z" level=info msg="metadata content store policy set" policy=shared Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736121156Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736175990Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736191428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736203675Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736215271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736519370Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736692855Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736764967Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736776166Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736786009Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736798053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736807337Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736817710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738272 containerd[1473]: time="2026-04-28T02:02:31.736827865Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736837923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736855345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736864167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736874409Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736888474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.736899980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737001007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737016055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737026406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737036332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737045610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737055822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737065782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.738758 containerd[1473]: time="2026-04-28T02:02:31.737077152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737086289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737094973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737104330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737115204Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737130318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737138890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737146820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737178513Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737194752Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737203419Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737212170Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737218624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737227778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 02:02:31.739007 containerd[1473]: time="2026-04-28T02:02:31.737239046Z" level=info msg="NRI interface is disabled by configuration." Apr 28 02:02:31.739250 containerd[1473]: time="2026-04-28T02:02:31.737247040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 28 02:02:31.739264 containerd[1473]: time="2026-04-28T02:02:31.737612264Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 02:02:31.739264 containerd[1473]: time="2026-04-28T02:02:31.737654003Z" level=info msg="Connect containerd service" Apr 28 02:02:31.739264 containerd[1473]: time="2026-04-28T02:02:31.737686547Z" level=info msg="using legacy CRI server" Apr 28 02:02:31.739264 containerd[1473]: time="2026-04-28T02:02:31.737691167Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 02:02:31.739264 containerd[1473]: time="2026-04-28T02:02:31.737787485Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 02:02:31.741558 containerd[1473]: time="2026-04-28T02:02:31.741114798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 02:02:31.741856 containerd[1473]: time="2026-04-28T02:02:31.741828648Z" level=info msg="Start subscribing containerd event" Apr 28 02:02:31.742278 containerd[1473]: time="2026-04-28T02:02:31.742266064Z" level=info msg="Start recovering state" Apr 28 02:02:31.742357 containerd[1473]: time="2026-04-28T02:02:31.742349854Z" level=info msg="Start event monitor" Apr 28 02:02:31.742752 containerd[1473]: time="2026-04-28T02:02:31.742537317Z" level=info msg="Start 
snapshots syncer" Apr 28 02:02:31.742752 containerd[1473]: time="2026-04-28T02:02:31.742548661Z" level=info msg="Start cni network conf syncer for default" Apr 28 02:02:31.742752 containerd[1473]: time="2026-04-28T02:02:31.742555003Z" level=info msg="Start streaming server" Apr 28 02:02:31.746281 containerd[1473]: time="2026-04-28T02:02:31.744164651Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 02:02:31.749673 containerd[1473]: time="2026-04-28T02:02:31.748807740Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 02:02:31.755846 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 02:02:31.757125 containerd[1473]: time="2026-04-28T02:02:31.756148007Z" level=info msg="containerd successfully booted in 0.085467s" Apr 28 02:02:31.816232 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 02:02:31.862620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 02:02:31.881251 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 02:02:31.897053 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 02:02:31.897249 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 02:02:31.915286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 02:02:31.930170 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 02:02:31.942540 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 02:02:31.951678 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 02:02:31.960251 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 02:02:32.146339 tar[1471]: linux-amd64/README.md Apr 28 02:02:32.168315 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 02:02:32.747257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:02:32.756205 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 02:02:32.763904 systemd[1]: Startup finished in 3.788s (kernel) + 18.815s (initrd) + 12.294s (userspace) = 34.898s. Apr 28 02:02:32.946711 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:02:33.777541 kubelet[1547]: E0428 02:02:33.775892 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:02:33.779599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:02:33.779773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:02:33.780222 systemd[1]: kubelet.service: Consumed 1.265s CPU time. Apr 28 02:02:38.504630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 02:02:38.518937 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:54128.service - OpenSSH per-connection server daemon (10.0.0.1:54128). Apr 28 02:02:38.701731 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 54128 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:38.709789 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:38.734229 systemd-logind[1463]: New session 1 of user core. Apr 28 02:02:38.735253 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 02:02:38.745349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 02:02:38.790623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 28 02:02:38.808926 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 02:02:38.817314 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 02:02:39.017305 systemd[1564]: Queued start job for default target default.target. Apr 28 02:02:39.034217 systemd[1564]: Created slice app.slice - User Application Slice. Apr 28 02:02:39.034313 systemd[1564]: Reached target paths.target - Paths. Apr 28 02:02:39.034334 systemd[1564]: Reached target timers.target - Timers. Apr 28 02:02:39.036922 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 02:02:39.061883 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 02:02:39.062131 systemd[1564]: Reached target sockets.target - Sockets. Apr 28 02:02:39.062141 systemd[1564]: Reached target basic.target - Basic System. Apr 28 02:02:39.062166 systemd[1564]: Reached target default.target - Main User Target. Apr 28 02:02:39.062185 systemd[1564]: Startup finished in 215ms. Apr 28 02:02:39.062533 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 02:02:39.064654 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 02:02:39.140585 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:54132.service - OpenSSH per-connection server daemon (10.0.0.1:54132). Apr 28 02:02:39.268844 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 54132 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:39.271175 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:39.288199 systemd-logind[1463]: New session 2 of user core. Apr 28 02:02:39.305235 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 02:02:39.387324 sshd[1575]: pam_unix(sshd:session): session closed for user core Apr 28 02:02:39.394834 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:54132.service: Deactivated successfully. 
Apr 28 02:02:39.396679 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 02:02:39.397867 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Apr 28 02:02:39.420865 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:43182.service - OpenSSH per-connection server daemon (10.0.0.1:43182). Apr 28 02:02:39.423094 systemd-logind[1463]: Removed session 2. Apr 28 02:02:39.464492 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 43182 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:39.466651 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:39.487873 systemd-logind[1463]: New session 3 of user core. Apr 28 02:02:39.504704 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 02:02:39.570535 sshd[1582]: pam_unix(sshd:session): session closed for user core Apr 28 02:02:39.583179 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:43182.service: Deactivated successfully. Apr 28 02:02:39.585161 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 02:02:39.587189 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Apr 28 02:02:39.588812 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190). Apr 28 02:02:39.591646 systemd-logind[1463]: Removed session 3. Apr 28 02:02:39.634349 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:39.637702 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:39.647914 systemd-logind[1463]: New session 4 of user core. Apr 28 02:02:39.666689 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 02:02:39.740740 sshd[1589]: pam_unix(sshd:session): session closed for user core Apr 28 02:02:39.746719 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:43190.service: Deactivated successfully. 
Apr 28 02:02:39.748270 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 02:02:39.752308 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Apr 28 02:02:39.765773 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:43202.service - OpenSSH per-connection server daemon (10.0.0.1:43202). Apr 28 02:02:39.767755 systemd-logind[1463]: Removed session 4. Apr 28 02:02:39.811697 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43202 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:39.814298 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:39.825296 systemd-logind[1463]: New session 5 of user core. Apr 28 02:02:39.839053 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 02:02:39.933146 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 02:02:39.933672 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:02:39.964774 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 28 02:02:39.970292 sshd[1596]: pam_unix(sshd:session): session closed for user core Apr 28 02:02:39.981190 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:43202.service: Deactivated successfully. Apr 28 02:02:39.986232 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 02:02:39.987753 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Apr 28 02:02:40.001295 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:43210.service - OpenSSH per-connection server daemon (10.0.0.1:43210). Apr 28 02:02:40.003736 systemd-logind[1463]: Removed session 5. 
Apr 28 02:02:40.043703 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 43210 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:40.045107 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:40.052653 systemd-logind[1463]: New session 6 of user core. Apr 28 02:02:40.066680 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 02:02:40.140295 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 02:02:40.140775 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:02:40.155056 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 28 02:02:40.177569 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 02:02:40.178750 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:02:40.212704 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 28 02:02:40.226728 auditctl[1612]: No rules Apr 28 02:02:40.227611 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 02:02:40.227878 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 28 02:02:40.230603 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 02:02:40.310905 augenrules[1630]: No rules Apr 28 02:02:40.313826 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 02:02:40.315948 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 28 02:02:40.319085 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 28 02:02:40.328842 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:43210.service: Deactivated successfully. Apr 28 02:02:40.330600 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 28 02:02:40.332290 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Apr 28 02:02:40.346045 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:43216.service - OpenSSH per-connection server daemon (10.0.0.1:43216). Apr 28 02:02:40.347823 systemd-logind[1463]: Removed session 6. Apr 28 02:02:40.395630 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 43216 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:02:40.397170 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:02:40.407537 systemd-logind[1463]: New session 7 of user core. Apr 28 02:02:40.432710 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 02:02:40.500727 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 02:02:40.501186 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:02:41.324587 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 02:02:41.324745 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 02:02:41.864623 dockerd[1662]: time="2026-04-28T02:02:41.864293097Z" level=info msg="Starting up" Apr 28 02:02:42.079091 dockerd[1662]: time="2026-04-28T02:02:42.078723445Z" level=info msg="Loading containers: start." Apr 28 02:02:42.449863 kernel: Initializing XFRM netlink socket Apr 28 02:02:42.707273 systemd-networkd[1396]: docker0: Link UP Apr 28 02:02:42.759181 dockerd[1662]: time="2026-04-28T02:02:42.758947891Z" level=info msg="Loading containers: done." 
Apr 28 02:02:42.801637 dockerd[1662]: time="2026-04-28T02:02:42.801180197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 02:02:42.801637 dockerd[1662]: time="2026-04-28T02:02:42.801616932Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 02:02:42.802263 dockerd[1662]: time="2026-04-28T02:02:42.801940553Z" level=info msg="Daemon has completed initialization" Apr 28 02:02:42.913579 dockerd[1662]: time="2026-04-28T02:02:42.913127486Z" level=info msg="API listen on /run/docker.sock" Apr 28 02:02:42.914621 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 02:02:43.852545 containerd[1473]: time="2026-04-28T02:02:43.851858600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 28 02:02:44.033333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 02:02:44.048216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:02:44.280079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:02:44.293347 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:02:44.451357 kubelet[1817]: E0428 02:02:44.451184 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:02:44.458083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:02:44.458287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:02:44.528263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831024812.mount: Deactivated successfully. Apr 28 02:02:46.511512 containerd[1473]: time="2026-04-28T02:02:46.511156466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:46.514187 containerd[1473]: time="2026-04-28T02:02:46.513888949Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 28 02:02:46.516577 containerd[1473]: time="2026-04-28T02:02:46.516518451Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:46.520670 containerd[1473]: time="2026-04-28T02:02:46.520583395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:46.521712 containerd[1473]: time="2026-04-28T02:02:46.521588750Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.669614306s" Apr 28 02:02:46.521712 containerd[1473]: time="2026-04-28T02:02:46.521615960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 28 02:02:46.523818 containerd[1473]: time="2026-04-28T02:02:46.523735964Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 28 02:02:48.387868 containerd[1473]: time="2026-04-28T02:02:48.386603911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:48.390009 containerd[1473]: time="2026-04-28T02:02:48.388955886Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 28 02:02:48.392811 containerd[1473]: time="2026-04-28T02:02:48.392258006Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:48.397720 containerd[1473]: time="2026-04-28T02:02:48.397538441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:48.399228 containerd[1473]: time="2026-04-28T02:02:48.398961234Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.875137029s" Apr 28 02:02:48.399228 containerd[1473]: time="2026-04-28T02:02:48.399117747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 28 02:02:48.401644 containerd[1473]: time="2026-04-28T02:02:48.401540301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 28 02:02:49.754660 containerd[1473]: time="2026-04-28T02:02:49.754537118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:49.756782 containerd[1473]: time="2026-04-28T02:02:49.756270465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 28 02:02:49.757571 containerd[1473]: time="2026-04-28T02:02:49.757534083Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:49.762853 containerd[1473]: time="2026-04-28T02:02:49.762525144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:49.763887 containerd[1473]: time="2026-04-28T02:02:49.763777639Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.36214217s" Apr 28 02:02:49.763930 
containerd[1473]: time="2026-04-28T02:02:49.763897322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 28 02:02:49.765758 containerd[1473]: time="2026-04-28T02:02:49.765620464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 28 02:02:50.852635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715943867.mount: Deactivated successfully. Apr 28 02:02:51.671209 containerd[1473]: time="2026-04-28T02:02:51.670239439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:51.673862 containerd[1473]: time="2026-04-28T02:02:51.673655924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 28 02:02:51.676509 containerd[1473]: time="2026-04-28T02:02:51.676229027Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:51.682018 containerd[1473]: time="2026-04-28T02:02:51.681897019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:51.683001 containerd[1473]: time="2026-04-28T02:02:51.682905349Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.917260769s" Apr 28 02:02:51.683001 containerd[1473]: time="2026-04-28T02:02:51.682996274Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 28 02:02:51.685231 containerd[1473]: time="2026-04-28T02:02:51.684878539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 28 02:02:52.169285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970487897.mount: Deactivated successfully. Apr 28 02:02:54.559033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 02:02:54.739162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:02:55.131334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:02:55.131563 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:02:55.184599 containerd[1473]: time="2026-04-28T02:02:55.182656375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.184599 containerd[1473]: time="2026-04-28T02:02:55.184266047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 28 02:02:55.186034 containerd[1473]: time="2026-04-28T02:02:55.185907836Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.189631 containerd[1473]: time="2026-04-28T02:02:55.189528293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.190634 containerd[1473]: time="2026-04-28T02:02:55.190326781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with 
image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.50533865s" Apr 28 02:02:55.190634 containerd[1473]: time="2026-04-28T02:02:55.190592395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 28 02:02:55.202964 containerd[1473]: time="2026-04-28T02:02:55.202702320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 28 02:02:55.506691 kubelet[1959]: E0428 02:02:55.505834 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:02:55.511352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:02:55.511723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:02:55.729959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028263200.mount: Deactivated successfully. 
Apr 28 02:02:55.752050 containerd[1473]: time="2026-04-28T02:02:55.751915370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.754722 containerd[1473]: time="2026-04-28T02:02:55.752694969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 28 02:02:55.757747 containerd[1473]: time="2026-04-28T02:02:55.757178454Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.764296 containerd[1473]: time="2026-04-28T02:02:55.764158988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:02:55.765728 containerd[1473]: time="2026-04-28T02:02:55.765621875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 561.748933ms" Apr 28 02:02:55.765728 containerd[1473]: time="2026-04-28T02:02:55.765650747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 28 02:02:55.767702 containerd[1473]: time="2026-04-28T02:02:55.767514874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 28 02:02:56.305234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73177786.mount: Deactivated successfully. 
Apr 28 02:02:57.686163 containerd[1473]: time="2026-04-28T02:02:57.685972380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:02:57.686936 containerd[1473]: time="2026-04-28T02:02:57.686764575Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255"
Apr 28 02:02:57.687798 containerd[1473]: time="2026-04-28T02:02:57.687741298Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:02:57.690651 containerd[1473]: time="2026-04-28T02:02:57.690599015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:02:57.691741 containerd[1473]: time="2026-04-28T02:02:57.691697025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.924157952s"
Apr 28 02:02:57.691795 containerd[1473]: time="2026-04-28T02:02:57.691742602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 28 02:03:00.412703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:03:00.420654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:03:00.445987 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-7.scope)...
Apr 28 02:03:00.446020 systemd[1]: Reloading...
Apr 28 02:03:00.512588 zram_generator::config[2104]: No configuration found.
Apr 28 02:03:00.605061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:03:00.655503 systemd[1]: Reloading finished in 209 ms.
Apr 28 02:03:00.704502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:03:00.707577 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 02:03:00.707780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:03:00.709231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:03:00.837291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:03:00.841285 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 02:03:00.892776 kubelet[2154]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 02:03:00.892776 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 02:03:00.893567 kubelet[2154]: I0428 02:03:00.892964 2154 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 02:03:01.447945 kubelet[2154]: I0428 02:03:01.447249 2154 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 28 02:03:01.447945 kubelet[2154]: I0428 02:03:01.447272 2154 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 02:03:01.450317 kubelet[2154]: I0428 02:03:01.450272 2154 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 28 02:03:01.450540 kubelet[2154]: I0428 02:03:01.450438 2154 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 02:03:01.450713 kubelet[2154]: I0428 02:03:01.450628 2154 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 02:03:01.502112 kubelet[2154]: I0428 02:03:01.501998 2154 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 02:03:01.502112 kubelet[2154]: E0428 02:03:01.502096 2154 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 02:03:01.507312 kubelet[2154]: E0428 02:03:01.507227 2154 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 02:03:01.507312 kubelet[2154]: I0428 02:03:01.507306 2154 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 28 02:03:01.513210 kubelet[2154]: I0428 02:03:01.513063 2154 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 28 02:03:01.514753 kubelet[2154]: I0428 02:03:01.514643 2154 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 02:03:01.514896 kubelet[2154]: I0428 02:03:01.514715 2154 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 02:03:01.514896 kubelet[2154]: I0428 02:03:01.514869 2154 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 02:03:01.514896 kubelet[2154]: I0428 02:03:01.514876 2154 container_manager_linux.go:306] "Creating device plugin manager"
Apr 28 02:03:01.515121 kubelet[2154]: I0428 02:03:01.514946 2154 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 28 02:03:01.518171 kubelet[2154]: I0428 02:03:01.518063 2154 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:03:01.518749 kubelet[2154]: I0428 02:03:01.518337 2154 kubelet.go:475] "Attempting to sync node with API server"
Apr 28 02:03:01.518749 kubelet[2154]: I0428 02:03:01.518443 2154 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 02:03:01.518749 kubelet[2154]: I0428 02:03:01.518520 2154 kubelet.go:387] "Adding apiserver pod source"
Apr 28 02:03:01.518749 kubelet[2154]: I0428 02:03:01.518536 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 02:03:01.522022 kubelet[2154]: E0428 02:03:01.521867 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 02:03:01.522022 kubelet[2154]: I0428 02:03:01.521938 2154 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 02:03:01.522491 kubelet[2154]: E0428 02:03:01.522289 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 02:03:01.522689 kubelet[2154]: I0428 02:03:01.522545 2154 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 02:03:01.522689 kubelet[2154]: I0428 02:03:01.522575 2154 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 28 02:03:01.522689 kubelet[2154]: W0428 02:03:01.522620 2154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 28 02:03:01.527759 kubelet[2154]: I0428 02:03:01.527711 2154 server.go:1262] "Started kubelet"
Apr 28 02:03:01.528187 kubelet[2154]: I0428 02:03:01.528104 2154 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 02:03:01.528229 kubelet[2154]: I0428 02:03:01.528188 2154 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 28 02:03:01.529336 kubelet[2154]: I0428 02:03:01.528267 2154 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 02:03:01.529336 kubelet[2154]: I0428 02:03:01.528460 2154 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 02:03:01.529336 kubelet[2154]: I0428 02:03:01.529284 2154 server.go:310] "Adding debug handlers to kubelet server"
Apr 28 02:03:01.529834 kubelet[2154]: I0428 02:03:01.529785 2154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 02:03:01.531696 kubelet[2154]: I0428 02:03:01.531577 2154 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 02:03:01.535049 kubelet[2154]: E0428 02:03:01.534219 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 02:03:01.535049 kubelet[2154]: I0428 02:03:01.534333 2154 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 28 02:03:01.535049 kubelet[2154]: I0428 02:03:01.534616 2154 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 28 02:03:01.535049 kubelet[2154]: I0428 02:03:01.534697 2154 reconciler.go:29] "Reconciler: start to sync state"
Apr 28 02:03:01.535049 kubelet[2154]: E0428 02:03:01.534795 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
Apr 28 02:03:01.535049 kubelet[2154]: E0428 02:03:01.533925 2154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa62eefc6a16f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 02:03:01.527631607 +0000 UTC m=+0.682010851,LastTimestamp:2026-04-28 02:03:01.527631607 +0000 UTC m=+0.682010851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 02:03:01.535779 kubelet[2154]: E0428 02:03:01.535236 2154 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 02:03:01.535779 kubelet[2154]: I0428 02:03:01.535556 2154 factory.go:223] Registration of the systemd container factory successfully
Apr 28 02:03:01.535779 kubelet[2154]: E0428 02:03:01.535575 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 02:03:01.535779 kubelet[2154]: I0428 02:03:01.535620 2154 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 02:03:01.537590 kubelet[2154]: I0428 02:03:01.537506 2154 factory.go:223] Registration of the containerd container factory successfully
Apr 28 02:03:01.541538 kubelet[2154]: I0428 02:03:01.541498 2154 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 28 02:03:01.549216 kubelet[2154]: I0428 02:03:01.549154 2154 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 02:03:01.549216 kubelet[2154]: I0428 02:03:01.549202 2154 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 02:03:01.549216 kubelet[2154]: I0428 02:03:01.549215 2154 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:03:01.552010 kubelet[2154]: I0428 02:03:01.551923 2154 policy_none.go:49] "None policy: Start"
Apr 28 02:03:01.552010 kubelet[2154]: I0428 02:03:01.551966 2154 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 28 02:03:01.552010 kubelet[2154]: I0428 02:03:01.551977 2154 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 28 02:03:01.553450 kubelet[2154]: I0428 02:03:01.553420 2154 policy_none.go:47] "Start"
Apr 28 02:03:01.560887 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 28 02:03:01.561256 kubelet[2154]: I0428 02:03:01.561201 2154 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 28 02:03:01.561256 kubelet[2154]: I0428 02:03:01.561255 2154 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 28 02:03:01.561342 kubelet[2154]: I0428 02:03:01.561276 2154 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 28 02:03:01.561342 kubelet[2154]: E0428 02:03:01.561312 2154 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 02:03:01.562013 kubelet[2154]: E0428 02:03:01.561916 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 02:03:01.575225 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 28 02:03:01.578302 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 28 02:03:01.590772 kubelet[2154]: E0428 02:03:01.590579 2154 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 28 02:03:01.591038 kubelet[2154]: I0428 02:03:01.590941 2154 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 28 02:03:01.591038 kubelet[2154]: I0428 02:03:01.590954 2154 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 28 02:03:01.591431 kubelet[2154]: I0428 02:03:01.591336 2154 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 28 02:03:01.592476 kubelet[2154]: E0428 02:03:01.592249 2154 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 28 02:03:01.592476 kubelet[2154]: E0428 02:03:01.592304 2154 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 02:03:01.680962 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice.
Apr 28 02:03:01.692119 kubelet[2154]: I0428 02:03:01.692027 2154 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 02:03:01.692460 kubelet[2154]: E0428 02:03:01.692355 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Apr 28 02:03:01.693089 kubelet[2154]: E0428 02:03:01.693044 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:03:01.695224 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice.
Apr 28 02:03:01.709253 kubelet[2154]: E0428 02:03:01.708986 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:03:01.711629 systemd[1]: Created slice kubepods-burstable-pod6d447496ec098e8360c3694ee9947d9b.slice - libcontainer container kubepods-burstable-pod6d447496ec098e8360c3694ee9947d9b.slice.
Apr 28 02:03:01.717319 kubelet[2154]: E0428 02:03:01.717237 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:03:01.736627 kubelet[2154]: E0428 02:03:01.736351 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms"
Apr 28 02:03:01.837032 kubelet[2154]: I0428 02:03:01.836837 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:03:01.837032 kubelet[2154]: I0428 02:03:01.836915 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:03:01.837032 kubelet[2154]: I0428 02:03:01.836933 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 28 02:03:01.837032 kubelet[2154]: I0428 02:03:01.836948 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:03:01.837032 kubelet[2154]: I0428 02:03:01.836963 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:03:01.838002 kubelet[2154]: I0428 02:03:01.837000 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:03:01.838002 kubelet[2154]: I0428 02:03:01.837034 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:03:01.838002 kubelet[2154]: I0428 02:03:01.837050 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:03:01.838002 kubelet[2154]: I0428 02:03:01.837083 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:03:01.896109 kubelet[2154]: I0428 02:03:01.895990 2154 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 02:03:01.897154 kubelet[2154]: E0428 02:03:01.896362 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Apr 28 02:03:01.997482 kubelet[2154]: E0428 02:03:01.997263 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:01.998886 containerd[1473]: time="2026-04-28T02:03:01.998782071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}"
Apr 28 02:03:02.013720 kubelet[2154]: E0428 02:03:02.013608 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:02.014955 containerd[1473]: time="2026-04-28T02:03:02.014803183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}"
Apr 28 02:03:02.020820 kubelet[2154]: E0428 02:03:02.020641 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:02.021633 containerd[1473]: time="2026-04-28T02:03:02.021580550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d447496ec098e8360c3694ee9947d9b,Namespace:kube-system,Attempt:0,}"
Apr 28 02:03:02.137672 kubelet[2154]: E0428 02:03:02.137462 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms"
Apr 28 02:03:02.300988 kubelet[2154]: I0428 02:03:02.300646 2154 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 02:03:02.301351 kubelet[2154]: E0428 02:03:02.301121 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Apr 28 02:03:02.433044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809141667.mount: Deactivated successfully.
Apr 28 02:03:02.440879 containerd[1473]: time="2026-04-28T02:03:02.440811488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 02:03:02.441793 containerd[1473]: time="2026-04-28T02:03:02.441748152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 28 02:03:02.444897 containerd[1473]: time="2026-04-28T02:03:02.444792249Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 02:03:02.445987 containerd[1473]: time="2026-04-28T02:03:02.445882755Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 02:03:02.446821 containerd[1473]: time="2026-04-28T02:03:02.446758672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 28 02:03:02.447845 containerd[1473]: time="2026-04-28T02:03:02.447733042Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 02:03:02.448752 containerd[1473]: time="2026-04-28T02:03:02.448700489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 28 02:03:02.451695 containerd[1473]: time="2026-04-28T02:03:02.451664043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 02:03:02.453140 containerd[1473]: time="2026-04-28T02:03:02.453095992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.201324ms"
Apr 28 02:03:02.454162 containerd[1473]: time="2026-04-28T02:03:02.454014882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 432.353801ms"
Apr 28 02:03:02.455265 containerd[1473]: time="2026-04-28T02:03:02.455213803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.330468ms"
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574608340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574642351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574653393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574198876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574593832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574616275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574847459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.575115 containerd[1473]: time="2026-04-28T02:03:02.574927696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.581565 containerd[1473]: time="2026-04-28T02:03:02.578914400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:03:02.581565 containerd[1473]: time="2026-04-28T02:03:02.578952643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:03:02.581565 containerd[1473]: time="2026-04-28T02:03:02.578964709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.581565 containerd[1473]: time="2026-04-28T02:03:02.579028091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:02.598591 systemd[1]: Started cri-containerd-7453875f9f3520e068d90a41ee9b660364638b8cfd850a85d8f001076328e577.scope - libcontainer container 7453875f9f3520e068d90a41ee9b660364638b8cfd850a85d8f001076328e577.
Apr 28 02:03:02.605767 systemd[1]: Started cri-containerd-7abac49aef1e3d6dac8815f8568680ddb879111dd572297403e4ae2acbeee2de.scope - libcontainer container 7abac49aef1e3d6dac8815f8568680ddb879111dd572297403e4ae2acbeee2de.
Apr 28 02:03:02.606660 systemd[1]: Started cri-containerd-934528cf1d96c8535c47eea802146b8564af328319ed4eaccd1c844b92ad3a65.scope - libcontainer container 934528cf1d96c8535c47eea802146b8564af328319ed4eaccd1c844b92ad3a65.
Apr 28 02:03:02.658692 containerd[1473]: time="2026-04-28T02:03:02.658550595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7453875f9f3520e068d90a41ee9b660364638b8cfd850a85d8f001076328e577\"" Apr 28 02:03:02.661235 kubelet[2154]: E0428 02:03:02.661128 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:02.668205 containerd[1473]: time="2026-04-28T02:03:02.668141432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d447496ec098e8360c3694ee9947d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7abac49aef1e3d6dac8815f8568680ddb879111dd572297403e4ae2acbeee2de\"" Apr 28 02:03:02.668788 kubelet[2154]: E0428 02:03:02.668737 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:02.669156 containerd[1473]: time="2026-04-28T02:03:02.669107239Z" level=info msg="CreateContainer within sandbox \"7453875f9f3520e068d90a41ee9b660364638b8cfd850a85d8f001076328e577\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 02:03:02.675895 containerd[1473]: time="2026-04-28T02:03:02.675725052Z" level=info msg="CreateContainer within sandbox \"7abac49aef1e3d6dac8815f8568680ddb879111dd572297403e4ae2acbeee2de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 02:03:02.681689 containerd[1473]: time="2026-04-28T02:03:02.681614823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"934528cf1d96c8535c47eea802146b8564af328319ed4eaccd1c844b92ad3a65\"" Apr 28 
02:03:02.682511 kubelet[2154]: E0428 02:03:02.682454 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:02.686562 containerd[1473]: time="2026-04-28T02:03:02.686521388Z" level=info msg="CreateContainer within sandbox \"934528cf1d96c8535c47eea802146b8564af328319ed4eaccd1c844b92ad3a65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 02:03:02.690095 containerd[1473]: time="2026-04-28T02:03:02.690044025Z" level=info msg="CreateContainer within sandbox \"7453875f9f3520e068d90a41ee9b660364638b8cfd850a85d8f001076328e577\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"89ced956c8be325a58e4fd665edd286bce48daa5ace69a6c3ca13a69017d134b\"" Apr 28 02:03:02.690890 containerd[1473]: time="2026-04-28T02:03:02.690846228Z" level=info msg="StartContainer for \"89ced956c8be325a58e4fd665edd286bce48daa5ace69a6c3ca13a69017d134b\"" Apr 28 02:03:02.699042 kubelet[2154]: E0428 02:03:02.698988 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 02:03:02.699679 containerd[1473]: time="2026-04-28T02:03:02.699639813Z" level=info msg="CreateContainer within sandbox \"7abac49aef1e3d6dac8815f8568680ddb879111dd572297403e4ae2acbeee2de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6358cd45ad54e2bdd0454eab319456c967cb6b7fd28fd69d18bc4127f6c65369\"" Apr 28 02:03:02.700491 containerd[1473]: time="2026-04-28T02:03:02.700024970Z" level=info msg="StartContainer for \"6358cd45ad54e2bdd0454eab319456c967cb6b7fd28fd69d18bc4127f6c65369\"" Apr 28 02:03:02.707441 containerd[1473]: 
time="2026-04-28T02:03:02.707249594Z" level=info msg="CreateContainer within sandbox \"934528cf1d96c8535c47eea802146b8564af328319ed4eaccd1c844b92ad3a65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0935d77fdeac0dc8fd106ccd45e887155014e47592bfda4361c8b929155eb44c\"" Apr 28 02:03:02.708979 containerd[1473]: time="2026-04-28T02:03:02.708908070Z" level=info msg="StartContainer for \"0935d77fdeac0dc8fd106ccd45e887155014e47592bfda4361c8b929155eb44c\"" Apr 28 02:03:02.723752 systemd[1]: Started cri-containerd-89ced956c8be325a58e4fd665edd286bce48daa5ace69a6c3ca13a69017d134b.scope - libcontainer container 89ced956c8be325a58e4fd665edd286bce48daa5ace69a6c3ca13a69017d134b. Apr 28 02:03:02.726249 systemd[1]: Started cri-containerd-6358cd45ad54e2bdd0454eab319456c967cb6b7fd28fd69d18bc4127f6c65369.scope - libcontainer container 6358cd45ad54e2bdd0454eab319456c967cb6b7fd28fd69d18bc4127f6c65369. Apr 28 02:03:02.755007 systemd[1]: Started cri-containerd-0935d77fdeac0dc8fd106ccd45e887155014e47592bfda4361c8b929155eb44c.scope - libcontainer container 0935d77fdeac0dc8fd106ccd45e887155014e47592bfda4361c8b929155eb44c. 
Apr 28 02:03:02.782711 containerd[1473]: time="2026-04-28T02:03:02.782578031Z" level=info msg="StartContainer for \"89ced956c8be325a58e4fd665edd286bce48daa5ace69a6c3ca13a69017d134b\" returns successfully" Apr 28 02:03:02.782711 containerd[1473]: time="2026-04-28T02:03:02.782702935Z" level=info msg="StartContainer for \"6358cd45ad54e2bdd0454eab319456c967cb6b7fd28fd69d18bc4127f6c65369\" returns successfully" Apr 28 02:03:02.813240 containerd[1473]: time="2026-04-28T02:03:02.812859756Z" level=info msg="StartContainer for \"0935d77fdeac0dc8fd106ccd45e887155014e47592bfda4361c8b929155eb44c\" returns successfully" Apr 28 02:03:02.826988 kubelet[2154]: E0428 02:03:02.826845 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 02:03:02.829056 kubelet[2154]: E0428 02:03:02.829012 2154 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 02:03:03.105507 kubelet[2154]: I0428 02:03:03.105265 2154 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:03:03.577261 kubelet[2154]: E0428 02:03:03.577035 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:03.577261 kubelet[2154]: E0428 02:03:03.577231 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
02:03:03.579075 kubelet[2154]: E0428 02:03:03.578738 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:03.579075 kubelet[2154]: E0428 02:03:03.578814 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:03.580281 kubelet[2154]: E0428 02:03:03.580218 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:03.580423 kubelet[2154]: E0428 02:03:03.580322 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:03.966748 kubelet[2154]: E0428 02:03:03.966159 2154 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 02:03:04.054209 kubelet[2154]: I0428 02:03:04.054036 2154 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 02:03:04.054209 kubelet[2154]: E0428 02:03:04.054096 2154 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 02:03:04.066110 kubelet[2154]: E0428 02:03:04.065560 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.167880 kubelet[2154]: E0428 02:03:04.167585 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.268887 kubelet[2154]: E0428 02:03:04.268790 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.369208 kubelet[2154]: 
E0428 02:03:04.369039 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.471602 kubelet[2154]: E0428 02:03:04.471029 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.573706 kubelet[2154]: E0428 02:03:04.572819 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.584289 kubelet[2154]: E0428 02:03:04.584164 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:04.584289 kubelet[2154]: E0428 02:03:04.584298 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:04.584716 kubelet[2154]: E0428 02:03:04.584507 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:04.584716 kubelet[2154]: E0428 02:03:04.584577 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:04.584716 kubelet[2154]: E0428 02:03:04.584614 2154 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:03:04.584716 kubelet[2154]: E0428 02:03:04.584689 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:04.673589 kubelet[2154]: E0428 02:03:04.673294 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 28 02:03:04.774130 kubelet[2154]: E0428 02:03:04.773887 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.875330 kubelet[2154]: E0428 02:03:04.874772 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:04.976291 kubelet[2154]: E0428 02:03:04.976131 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.077136 kubelet[2154]: E0428 02:03:05.076873 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.177904 kubelet[2154]: E0428 02:03:05.177501 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.279166 kubelet[2154]: E0428 02:03:05.278915 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.380538 kubelet[2154]: E0428 02:03:05.380273 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.482130 kubelet[2154]: E0428 02:03:05.481662 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.583029 kubelet[2154]: E0428 02:03:05.582876 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.683902 kubelet[2154]: E0428 02:03:05.683698 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.785448 kubelet[2154]: E0428 02:03:05.785109 2154 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.886709 kubelet[2154]: E0428 02:03:05.886470 2154 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:05.935633 kubelet[2154]: I0428 02:03:05.935279 2154 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:05.941884 kubelet[2154]: I0428 02:03:05.941583 2154 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:05.944906 kubelet[2154]: I0428 02:03:05.944885 2154 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:06.296457 systemd[1]: Reloading requested from client PID 2445 ('systemctl') (unit session-7.scope)... Apr 28 02:03:06.296489 systemd[1]: Reloading... Apr 28 02:03:06.355563 zram_generator::config[2480]: No configuration found. Apr 28 02:03:06.464847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 28 02:03:06.524331 kubelet[2154]: I0428 02:03:06.524105 2154 apiserver.go:52] "Watching apiserver" Apr 28 02:03:06.527556 kubelet[2154]: E0428 02:03:06.527331 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:06.527870 kubelet[2154]: E0428 02:03:06.527464 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:06.527870 kubelet[2154]: E0428 02:03:06.527861 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:06.535889 kubelet[2154]: I0428 02:03:06.535727 2154 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 02:03:06.537980 systemd[1]: Reloading finished in 241 ms. Apr 28 02:03:06.584113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:03:06.602264 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 02:03:06.602619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:03:06.602691 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 124.8M memory peak, 0B memory swap peak. Apr 28 02:03:06.615031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:03:06.761065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:03:06.767778 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 02:03:06.838333 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 28 02:03:06.838333 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:03:06.838333 kubelet[2529]: I0428 02:03:06.837778 2529 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 02:03:06.850013 kubelet[2529]: I0428 02:03:06.849843 2529 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 02:03:06.850013 kubelet[2529]: I0428 02:03:06.849899 2529 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 02:03:06.850013 kubelet[2529]: I0428 02:03:06.849985 2529 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 02:03:06.850013 kubelet[2529]: I0428 02:03:06.849995 2529 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 02:03:06.852710 kubelet[2529]: I0428 02:03:06.852652 2529 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 02:03:06.854576 kubelet[2529]: I0428 02:03:06.854472 2529 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 02:03:06.857024 kubelet[2529]: I0428 02:03:06.856970 2529 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 02:03:06.862176 kubelet[2529]: E0428 02:03:06.862030 2529 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 02:03:06.862176 kubelet[2529]: I0428 02:03:06.862165 2529 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 28 02:03:06.867709 kubelet[2529]: I0428 02:03:06.867544 2529 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 28 02:03:06.868036 kubelet[2529]: I0428 02:03:06.867889 2529 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 02:03:06.868108 kubelet[2529]: I0428 02:03:06.867945 2529 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 
02:03:06.868108 kubelet[2529]: I0428 02:03:06.868062 2529 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 02:03:06.868108 kubelet[2529]: I0428 02:03:06.868069 2529 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 02:03:06.868108 kubelet[2529]: I0428 02:03:06.868085 2529 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 02:03:06.868540 kubelet[2529]: I0428 02:03:06.868313 2529 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:03:06.868540 kubelet[2529]: I0428 02:03:06.868502 2529 kubelet.go:475] "Attempting to sync node with API server" Apr 28 02:03:06.868540 kubelet[2529]: I0428 02:03:06.868514 2529 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 02:03:06.868540 kubelet[2529]: I0428 02:03:06.868529 2529 kubelet.go:387] "Adding apiserver pod source" Apr 28 02:03:06.868540 kubelet[2529]: I0428 02:03:06.868537 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 02:03:06.871476 kubelet[2529]: I0428 02:03:06.871319 2529 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 02:03:06.872058 kubelet[2529]: I0428 02:03:06.872021 2529 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 02:03:06.872058 kubelet[2529]: I0428 02:03:06.872051 2529 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 02:03:06.877447 kubelet[2529]: I0428 02:03:06.877348 2529 server.go:1262] "Started kubelet" Apr 28 02:03:06.880130 kubelet[2529]: I0428 02:03:06.880024 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 02:03:06.881520 kubelet[2529]: I0428 02:03:06.881335 2529 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 02:03:06.883434 kubelet[2529]: I0428 02:03:06.882006 2529 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 02:03:06.883434 kubelet[2529]: I0428 02:03:06.883138 2529 server.go:310] "Adding debug handlers to kubelet server" Apr 28 02:03:06.883434 kubelet[2529]: I0428 02:03:06.883358 2529 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 02:03:06.883640 kubelet[2529]: E0428 02:03:06.883599 2529 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:03:06.884258 kubelet[2529]: I0428 02:03:06.884166 2529 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 02:03:06.886015 kubelet[2529]: I0428 02:03:06.884353 2529 reconciler.go:29] "Reconciler: start to sync state" Apr 28 02:03:06.888159 kubelet[2529]: I0428 02:03:06.887927 2529 factory.go:223] Registration of the systemd container factory successfully Apr 28 02:03:06.888701 kubelet[2529]: I0428 02:03:06.888197 2529 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 02:03:06.890640 kubelet[2529]: I0428 02:03:06.890602 2529 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 02:03:06.890751 kubelet[2529]: I0428 02:03:06.890655 2529 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 02:03:06.890875 kubelet[2529]: I0428 02:03:06.890823 2529 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 02:03:06.900289 kubelet[2529]: I0428 02:03:06.900015 2529 factory.go:223] Registration of the containerd container factory successfully Apr 28 02:03:06.901812 kubelet[2529]: E0428 
02:03:06.901510 2529 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 02:03:06.919460 kubelet[2529]: I0428 02:03:06.919060 2529 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 02:03:06.922458 kubelet[2529]: I0428 02:03:06.922293 2529 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 02:03:06.922458 kubelet[2529]: I0428 02:03:06.922314 2529 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 02:03:06.922458 kubelet[2529]: I0428 02:03:06.922331 2529 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 02:03:06.922691 kubelet[2529]: E0428 02:03:06.922521 2529 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:03:06.959463 kubelet[2529]: I0428 02:03:06.959261 2529 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 02:03:06.959463 kubelet[2529]: I0428 02:03:06.959294 2529 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 02:03:06.959463 kubelet[2529]: I0428 02:03:06.959478 2529 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959743 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959752 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959765 2529 policy_none.go:49] "None policy: Start" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959773 2529 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959779 2529 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959855 2529 
state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 28 02:03:06.960065 kubelet[2529]: I0428 02:03:06.959861 2529 policy_none.go:47] "Start" Apr 28 02:03:06.978527 kubelet[2529]: E0428 02:03:06.978461 2529 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:03:06.978782 kubelet[2529]: I0428 02:03:06.978723 2529 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:03:06.978782 kubelet[2529]: I0428 02:03:06.978734 2529 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:03:06.980331 kubelet[2529]: E0428 02:03:06.980296 2529 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 02:03:06.980331 kubelet[2529]: I0428 02:03:06.980325 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:03:07.026762 kubelet[2529]: I0428 02:03:07.026283 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:07.026762 kubelet[2529]: I0428 02:03:07.026687 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.026762 kubelet[2529]: I0428 02:03:07.026295 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:07.047075 kubelet[2529]: E0428 02:03:07.046914 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:07.047863 kubelet[2529]: E0428 02:03:07.047244 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.047863 kubelet[2529]: 
E0428 02:03:07.047300 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:07.086258 kubelet[2529]: I0428 02:03:07.086164 2529 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:03:07.095647 kubelet[2529]: I0428 02:03:07.095156 2529 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 02:03:07.095647 kubelet[2529]: I0428 02:03:07.095296 2529 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 02:03:07.188059 kubelet[2529]: I0428 02:03:07.187819 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:07.188059 kubelet[2529]: I0428 02:03:07.187898 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:07.188059 kubelet[2529]: I0428 02:03:07.187928 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d447496ec098e8360c3694ee9947d9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d447496ec098e8360c3694ee9947d9b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:07.188059 kubelet[2529]: I0428 02:03:07.188017 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.188059 kubelet[2529]: I0428 02:03:07.188037 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.189305 kubelet[2529]: I0428 02:03:07.188056 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.189305 kubelet[2529]: I0428 02:03:07.188076 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.189305 kubelet[2529]: I0428 02:03:07.188158 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:03:07.189305 kubelet[2529]: I0428 02:03:07.188206 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:07.349942 kubelet[2529]: E0428 02:03:07.348217 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:07.349942 kubelet[2529]: E0428 02:03:07.348454 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:07.349942 kubelet[2529]: E0428 02:03:07.348182 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:07.356919 sudo[2570]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 28 02:03:07.357352 sudo[2570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 28 02:03:08.044446 kubelet[2529]: I0428 02:03:07.979088 2529 apiserver.go:52] "Watching apiserver" Apr 28 02:03:08.240137 kubelet[2529]: E0428 02:03:08.239223 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:08.240137 kubelet[2529]: I0428 02:03:08.239226 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:08.240137 kubelet[2529]: I0428 02:03:08.239687 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:08.272315 kubelet[2529]: E0428 02:03:08.272028 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 02:03:08.272315 kubelet[2529]: E0428 02:03:08.272530 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:08.273860 kubelet[2529]: E0428 02:03:08.273842 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 28 02:03:08.274976 kubelet[2529]: E0428 02:03:08.274681 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:08.288065 kubelet[2529]: I0428 02:03:08.287677 2529 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 02:03:08.363069 kubelet[2529]: I0428 02:03:08.361719 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.361700005 podStartE2EDuration="3.361700005s" podCreationTimestamp="2026-04-28 02:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:08.287897774 +0000 UTC m=+1.513167365" watchObservedRunningTime="2026-04-28 02:03:08.361700005 +0000 UTC m=+1.586969554" Apr 28 02:03:08.376843 kubelet[2529]: I0428 02:03:08.376708 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.376685482 podStartE2EDuration="3.376685482s" podCreationTimestamp="2026-04-28 02:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:08.363671809 +0000 UTC m=+1.588941365" 
watchObservedRunningTime="2026-04-28 02:03:08.376685482 +0000 UTC m=+1.601955044" Apr 28 02:03:08.390059 kubelet[2529]: I0428 02:03:08.388928 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.388903154 podStartE2EDuration="3.388903154s" podCreationTimestamp="2026-04-28 02:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:08.378087934 +0000 UTC m=+1.603357494" watchObservedRunningTime="2026-04-28 02:03:08.388903154 +0000 UTC m=+1.614172711" Apr 28 02:03:08.866478 sudo[2570]: pam_unix(sudo:session): session closed for user root Apr 28 02:03:09.245344 kubelet[2529]: E0428 02:03:09.245198 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:09.248097 kubelet[2529]: E0428 02:03:09.246797 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:09.251931 kubelet[2529]: E0428 02:03:09.251070 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:10.163452 sudo[1642]: pam_unix(sudo:session): session closed for user root Apr 28 02:03:10.171288 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 28 02:03:10.179780 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:43216.service: Deactivated successfully. Apr 28 02:03:10.181946 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 02:03:10.182147 systemd[1]: session-7.scope: Consumed 6.471s CPU time, 160.5M memory peak, 0B memory swap peak. Apr 28 02:03:10.182866 systemd-logind[1463]: Session 7 logged out. 
Waiting for processes to exit. Apr 28 02:03:10.187409 systemd-logind[1463]: Removed session 7. Apr 28 02:03:13.351456 kubelet[2529]: I0428 02:03:13.350940 2529 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 02:03:13.354636 kubelet[2529]: I0428 02:03:13.353295 2529 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 02:03:13.354682 containerd[1473]: time="2026-04-28T02:03:13.353052207Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 02:03:13.671658 kubelet[2529]: E0428 02:03:13.671216 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.244681 kubelet[2529]: I0428 02:03:14.244502 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4949b79b-80c5-46ed-bd4d-3558e89e03a1-kube-proxy\") pod \"kube-proxy-m77w6\" (UID: \"4949b79b-80c5-46ed-bd4d-3558e89e03a1\") " pod="kube-system/kube-proxy-m77w6" Apr 28 02:03:14.244681 kubelet[2529]: I0428 02:03:14.244535 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4949b79b-80c5-46ed-bd4d-3558e89e03a1-xtables-lock\") pod \"kube-proxy-m77w6\" (UID: \"4949b79b-80c5-46ed-bd4d-3558e89e03a1\") " pod="kube-system/kube-proxy-m77w6" Apr 28 02:03:14.244681 kubelet[2529]: I0428 02:03:14.244561 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvr7r\" (UniqueName: \"kubernetes.io/projected/4949b79b-80c5-46ed-bd4d-3558e89e03a1-kube-api-access-fvr7r\") pod \"kube-proxy-m77w6\" (UID: \"4949b79b-80c5-46ed-bd4d-3558e89e03a1\") " pod="kube-system/kube-proxy-m77w6" Apr 28 
02:03:14.244681 kubelet[2529]: I0428 02:03:14.244582 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4949b79b-80c5-46ed-bd4d-3558e89e03a1-lib-modules\") pod \"kube-proxy-m77w6\" (UID: \"4949b79b-80c5-46ed-bd4d-3558e89e03a1\") " pod="kube-system/kube-proxy-m77w6" Apr 28 02:03:14.257158 systemd[1]: Created slice kubepods-besteffort-pod4949b79b_80c5_46ed_bd4d_3558e89e03a1.slice - libcontainer container kubepods-besteffort-pod4949b79b_80c5_46ed_bd4d_3558e89e03a1.slice. Apr 28 02:03:14.263897 kubelet[2529]: E0428 02:03:14.263594 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.280356 systemd[1]: Created slice kubepods-burstable-pod20971217_5913_4bbf_9ab4_c8eb8a4c3642.slice - libcontainer container kubepods-burstable-pod20971217_5913_4bbf_9ab4_c8eb8a4c3642.slice. 
Apr 28 02:03:14.347850 kubelet[2529]: I0428 02:03:14.347703 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-xtables-lock\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.347850 kubelet[2529]: I0428 02:03:14.347726 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-net\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.347850 kubelet[2529]: I0428 02:03:14.347738 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf54z\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-kube-api-access-vf54z\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.347850 kubelet[2529]: I0428 02:03:14.347750 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-run\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.347850 kubelet[2529]: I0428 02:03:14.347761 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-config-path\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347774 2529 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-kernel\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347784 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hubble-tls\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347810 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hostproc\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347820 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-cgroup\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347895 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cni-path\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349039 kubelet[2529]: I0428 02:03:14.347906 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-lib-modules\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349244 kubelet[2529]: I0428 02:03:14.347999 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20971217-5913-4bbf-9ab4-c8eb8a4c3642-clustermesh-secrets\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349244 kubelet[2529]: I0428 02:03:14.348021 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-bpf-maps\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.349244 kubelet[2529]: I0428 02:03:14.348130 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-etc-cni-netd\") pod \"cilium-j25pr\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") " pod="kube-system/cilium-j25pr" Apr 28 02:03:14.566002 systemd[1]: Created slice kubepods-besteffort-pode79f4351_2402_451b_9b19_de5c8871a487.slice - libcontainer container kubepods-besteffort-pode79f4351_2402_451b_9b19_de5c8871a487.slice. 
Apr 28 02:03:14.586534 kubelet[2529]: E0428 02:03:14.583831 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.587422 containerd[1473]: time="2026-04-28T02:03:14.587246438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m77w6,Uid:4949b79b-80c5-46ed-bd4d-3558e89e03a1,Namespace:kube-system,Attempt:0,}" Apr 28 02:03:14.589921 kubelet[2529]: E0428 02:03:14.589663 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.590440 containerd[1473]: time="2026-04-28T02:03:14.590239327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j25pr,Uid:20971217-5913-4bbf-9ab4-c8eb8a4c3642,Namespace:kube-system,Attempt:0,}" Apr 28 02:03:14.643180 containerd[1473]: time="2026-04-28T02:03:14.641877569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:03:14.643180 containerd[1473]: time="2026-04-28T02:03:14.642157222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:03:14.643180 containerd[1473]: time="2026-04-28T02:03:14.642172328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.644856 containerd[1473]: time="2026-04-28T02:03:14.644137940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:03:14.647063 containerd[1473]: time="2026-04-28T02:03:14.645351363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.650181 containerd[1473]: time="2026-04-28T02:03:14.649071723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:03:14.650181 containerd[1473]: time="2026-04-28T02:03:14.649143668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.650181 containerd[1473]: time="2026-04-28T02:03:14.649473172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.651758 kubelet[2529]: I0428 02:03:14.651525 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blkp4\" (UniqueName: \"kubernetes.io/projected/e79f4351-2402-451b-9b19-de5c8871a487-kube-api-access-blkp4\") pod \"cilium-operator-6f9c7c5859-fdpbc\" (UID: \"e79f4351-2402-451b-9b19-de5c8871a487\") " pod="kube-system/cilium-operator-6f9c7c5859-fdpbc" Apr 28 02:03:14.651758 kubelet[2529]: I0428 02:03:14.651596 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e79f4351-2402-451b-9b19-de5c8871a487-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fdpbc\" (UID: \"e79f4351-2402-451b-9b19-de5c8871a487\") " pod="kube-system/cilium-operator-6f9c7c5859-fdpbc" Apr 28 02:03:14.681736 systemd[1]: Started cri-containerd-520eae72d5c30d7c2dd5f33c6e0a7d41e9a937580ab10e63aee7bc5b00fc97a4.scope - libcontainer container 520eae72d5c30d7c2dd5f33c6e0a7d41e9a937580ab10e63aee7bc5b00fc97a4. Apr 28 02:03:14.686141 systemd[1]: Started cri-containerd-b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9.scope - libcontainer container b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9. 
Apr 28 02:03:14.716732 containerd[1473]: time="2026-04-28T02:03:14.716572083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m77w6,Uid:4949b79b-80c5-46ed-bd4d-3558e89e03a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"520eae72d5c30d7c2dd5f33c6e0a7d41e9a937580ab10e63aee7bc5b00fc97a4\"" Apr 28 02:03:14.718270 kubelet[2529]: E0428 02:03:14.717871 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.725706 containerd[1473]: time="2026-04-28T02:03:14.725358711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j25pr,Uid:20971217-5913-4bbf-9ab4-c8eb8a4c3642,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\"" Apr 28 02:03:14.727573 kubelet[2529]: E0428 02:03:14.727460 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.728656 containerd[1473]: time="2026-04-28T02:03:14.728564738Z" level=info msg="CreateContainer within sandbox \"520eae72d5c30d7c2dd5f33c6e0a7d41e9a937580ab10e63aee7bc5b00fc97a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 02:03:14.728656 containerd[1473]: time="2026-04-28T02:03:14.728604622Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 28 02:03:14.758029 containerd[1473]: time="2026-04-28T02:03:14.757848791Z" level=info msg="CreateContainer within sandbox \"520eae72d5c30d7c2dd5f33c6e0a7d41e9a937580ab10e63aee7bc5b00fc97a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8db82c43ac4a3c67972c042bb074df3c2ad1c037dab2a9ebdba561d9a6969ddb\"" Apr 28 02:03:14.758925 containerd[1473]: time="2026-04-28T02:03:14.758890829Z" 
level=info msg="StartContainer for \"8db82c43ac4a3c67972c042bb074df3c2ad1c037dab2a9ebdba561d9a6969ddb\"" Apr 28 02:03:14.803669 systemd[1]: Started cri-containerd-8db82c43ac4a3c67972c042bb074df3c2ad1c037dab2a9ebdba561d9a6969ddb.scope - libcontainer container 8db82c43ac4a3c67972c042bb074df3c2ad1c037dab2a9ebdba561d9a6969ddb. Apr 28 02:03:14.839844 containerd[1473]: time="2026-04-28T02:03:14.838628962Z" level=info msg="StartContainer for \"8db82c43ac4a3c67972c042bb074df3c2ad1c037dab2a9ebdba561d9a6969ddb\" returns successfully" Apr 28 02:03:14.887051 kubelet[2529]: E0428 02:03:14.886888 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:14.887718 containerd[1473]: time="2026-04-28T02:03:14.887480792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fdpbc,Uid:e79f4351-2402-451b-9b19-de5c8871a487,Namespace:kube-system,Attempt:0,}" Apr 28 02:03:14.926199 containerd[1473]: time="2026-04-28T02:03:14.925914798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:03:14.926199 containerd[1473]: time="2026-04-28T02:03:14.925982133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:03:14.926199 containerd[1473]: time="2026-04-28T02:03:14.926006411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.926199 containerd[1473]: time="2026-04-28T02:03:14.926110075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:03:14.955749 systemd[1]: Started cri-containerd-4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb.scope - libcontainer container 4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb. Apr 28 02:03:15.005132 containerd[1473]: time="2026-04-28T02:03:15.004903454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fdpbc,Uid:e79f4351-2402-451b-9b19-de5c8871a487,Namespace:kube-system,Attempt:0,} returns sandbox id \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\"" Apr 28 02:03:15.007304 kubelet[2529]: E0428 02:03:15.007069 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:15.267706 kubelet[2529]: E0428 02:03:15.267336 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:15.278314 kubelet[2529]: E0428 02:03:15.277991 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:16.872152 update_engine[1466]: I20260428 02:03:16.870658 1466 update_attempter.cc:509] Updating boot flags... 
Apr 28 02:03:16.932512 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2914) Apr 28 02:03:17.007129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2915) Apr 28 02:03:17.644898 kubelet[2529]: E0428 02:03:17.644669 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:17.645869 kubelet[2529]: I0428 02:03:17.645824 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m77w6" podStartSLOduration=3.645812648 podStartE2EDuration="3.645812648s" podCreationTimestamp="2026-04-28 02:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:15.282707302 +0000 UTC m=+8.507976862" watchObservedRunningTime="2026-04-28 02:03:17.645812648 +0000 UTC m=+10.871082207" Apr 28 02:03:18.285740 kubelet[2529]: E0428 02:03:18.285592 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:18.729594 kubelet[2529]: E0428 02:03:18.728236 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:21.456060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980877897.mount: Deactivated successfully. 
Apr 28 02:03:23.497718 containerd[1473]: time="2026-04-28T02:03:23.497484652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:03:23.498901 containerd[1473]: time="2026-04-28T02:03:23.498835169Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 28 02:03:23.499729 containerd[1473]: time="2026-04-28T02:03:23.499686438Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:03:23.501429 containerd[1473]: time="2026-04-28T02:03:23.501278045Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.772645409s" Apr 28 02:03:23.501429 containerd[1473]: time="2026-04-28T02:03:23.501333236Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 28 02:03:23.503811 containerd[1473]: time="2026-04-28T02:03:23.503629239Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 28 02:03:23.509523 containerd[1473]: time="2026-04-28T02:03:23.509306398Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 02:03:23.528066 containerd[1473]: time="2026-04-28T02:03:23.527976669Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\"" Apr 28 02:03:23.529242 containerd[1473]: time="2026-04-28T02:03:23.528587132Z" level=info msg="StartContainer for \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\"" Apr 28 02:03:23.592206 systemd[1]: Started cri-containerd-3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e.scope - libcontainer container 3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e. Apr 28 02:03:23.669039 containerd[1473]: time="2026-04-28T02:03:23.668816452Z" level=info msg="StartContainer for \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\" returns successfully" Apr 28 02:03:23.685142 systemd[1]: cri-containerd-3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e.scope: Deactivated successfully. 
Apr 28 02:03:23.810866 containerd[1473]: time="2026-04-28T02:03:23.809686557Z" level=info msg="shim disconnected" id=3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e namespace=k8s.io Apr 28 02:03:23.810866 containerd[1473]: time="2026-04-28T02:03:23.809756897Z" level=warning msg="cleaning up after shim disconnected" id=3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e namespace=k8s.io Apr 28 02:03:23.810866 containerd[1473]: time="2026-04-28T02:03:23.809767109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:03:24.309264 kubelet[2529]: E0428 02:03:24.309025 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:03:24.316304 containerd[1473]: time="2026-04-28T02:03:24.316015784Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 02:03:24.343892 containerd[1473]: time="2026-04-28T02:03:24.343638477Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\"" Apr 28 02:03:24.344590 containerd[1473]: time="2026-04-28T02:03:24.344503568Z" level=info msg="StartContainer for \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\"" Apr 28 02:03:24.401076 systemd[1]: Started cri-containerd-f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c.scope - libcontainer container f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c. 
Apr 28 02:03:24.457302 containerd[1473]: time="2026-04-28T02:03:24.457118999Z" level=info msg="StartContainer for \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\" returns successfully" Apr 28 02:03:24.498158 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 02:03:24.499014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:03:24.499139 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:03:24.505921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:03:24.506486 systemd[1]: cri-containerd-f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c.scope: Deactivated successfully. Apr 28 02:03:24.524352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e-rootfs.mount: Deactivated successfully. Apr 28 02:03:24.533698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c-rootfs.mount: Deactivated successfully. Apr 28 02:03:24.542563 containerd[1473]: time="2026-04-28T02:03:24.542190056Z" level=info msg="shim disconnected" id=f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c namespace=k8s.io Apr 28 02:03:24.542563 containerd[1473]: time="2026-04-28T02:03:24.542278077Z" level=warning msg="cleaning up after shim disconnected" id=f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c namespace=k8s.io Apr 28 02:03:24.542563 containerd[1473]: time="2026-04-28T02:03:24.542318334Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:03:24.552328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:03:25.159769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396078138.mount: Deactivated successfully. 
Apr 28 02:03:25.317174 kubelet[2529]: E0428 02:03:25.316883 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:25.329291 containerd[1473]: time="2026-04-28T02:03:25.328888037Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 28 02:03:25.377002 containerd[1473]: time="2026-04-28T02:03:25.376617523Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\""
Apr 28 02:03:25.378321 containerd[1473]: time="2026-04-28T02:03:25.378014475Z" level=info msg="StartContainer for \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\""
Apr 28 02:03:25.448347 systemd[1]: Started cri-containerd-145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c.scope - libcontainer container 145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c.
Apr 28 02:03:25.492094 containerd[1473]: time="2026-04-28T02:03:25.491611761Z" level=info msg="StartContainer for \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\" returns successfully"
Apr 28 02:03:25.493695 systemd[1]: cri-containerd-145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c.scope: Deactivated successfully.
Apr 28 02:03:25.532192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c-rootfs.mount: Deactivated successfully.
Apr 28 02:03:25.565888 containerd[1473]: time="2026-04-28T02:03:25.565702632Z" level=info msg="shim disconnected" id=145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c namespace=k8s.io
Apr 28 02:03:25.565888 containerd[1473]: time="2026-04-28T02:03:25.565791649Z" level=warning msg="cleaning up after shim disconnected" id=145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c namespace=k8s.io
Apr 28 02:03:25.565888 containerd[1473]: time="2026-04-28T02:03:25.565799495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:03:25.818564 containerd[1473]: time="2026-04-28T02:03:25.818165097Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:03:25.819598 containerd[1473]: time="2026-04-28T02:03:25.819198937Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 28 02:03:25.820807 containerd[1473]: time="2026-04-28T02:03:25.820777319Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:03:25.822909 containerd[1473]: time="2026-04-28T02:03:25.822785948Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.319130789s"
Apr 28 02:03:25.822909 containerd[1473]: time="2026-04-28T02:03:25.822842725Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 28 02:03:25.831123 containerd[1473]: time="2026-04-28T02:03:25.830946957Z" level=info msg="CreateContainer within sandbox \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 28 02:03:25.844486 containerd[1473]: time="2026-04-28T02:03:25.844285577Z" level=info msg="CreateContainer within sandbox \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\""
Apr 28 02:03:25.845204 containerd[1473]: time="2026-04-28T02:03:25.845057701Z" level=info msg="StartContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\""
Apr 28 02:03:25.900815 systemd[1]: Started cri-containerd-b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb.scope - libcontainer container b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb.
Apr 28 02:03:25.935323 containerd[1473]: time="2026-04-28T02:03:25.935143159Z" level=info msg="StartContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" returns successfully"
Apr 28 02:03:26.330579 kubelet[2529]: E0428 02:03:26.330294 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:26.342693 kubelet[2529]: E0428 02:03:26.342516 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:26.343119 containerd[1473]: time="2026-04-28T02:03:26.343087829Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 28 02:03:26.371487 containerd[1473]: time="2026-04-28T02:03:26.369593807Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\""
Apr 28 02:03:26.371487 containerd[1473]: time="2026-04-28T02:03:26.370276620Z" level=info msg="StartContainer for \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\""
Apr 28 02:03:26.451051 systemd[1]: Started cri-containerd-f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69.scope - libcontainer container f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69.
Apr 28 02:03:26.493001 systemd[1]: cri-containerd-f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69.scope: Deactivated successfully.
Apr 28 02:03:26.496850 containerd[1473]: time="2026-04-28T02:03:26.496647530Z" level=info msg="StartContainer for \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\" returns successfully"
Apr 28 02:03:26.534946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69-rootfs.mount: Deactivated successfully.
Apr 28 02:03:26.544892 containerd[1473]: time="2026-04-28T02:03:26.544804262Z" level=info msg="shim disconnected" id=f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69 namespace=k8s.io
Apr 28 02:03:26.544892 containerd[1473]: time="2026-04-28T02:03:26.544880566Z" level=warning msg="cleaning up after shim disconnected" id=f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69 namespace=k8s.io
Apr 28 02:03:26.544892 containerd[1473]: time="2026-04-28T02:03:26.544889130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:03:27.343645 kubelet[2529]: E0428 02:03:27.343205 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:27.343645 kubelet[2529]: E0428 02:03:27.343618 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:27.352241 containerd[1473]: time="2026-04-28T02:03:27.352142726Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 02:03:27.368679 kubelet[2529]: I0428 02:03:27.368559 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fdpbc" podStartSLOduration=2.553490386 podStartE2EDuration="13.368547422s" podCreationTimestamp="2026-04-28 02:03:14 +0000 UTC" firstStartedPulling="2026-04-28 02:03:15.008971157 +0000 UTC m=+8.234240706" lastFinishedPulling="2026-04-28 02:03:25.824028194 +0000 UTC m=+19.049297742" observedRunningTime="2026-04-28 02:03:26.403166585 +0000 UTC m=+19.628436133" watchObservedRunningTime="2026-04-28 02:03:27.368547422 +0000 UTC m=+20.593816981"
Apr 28 02:03:27.378771 containerd[1473]: time="2026-04-28T02:03:27.378695567Z" level=info msg="CreateContainer within sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\""
Apr 28 02:03:27.379681 containerd[1473]: time="2026-04-28T02:03:27.379544795Z" level=info msg="StartContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\""
Apr 28 02:03:27.428647 systemd[1]: Started cri-containerd-917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc.scope - libcontainer container 917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc.
Apr 28 02:03:27.491799 containerd[1473]: time="2026-04-28T02:03:27.491610414Z" level=info msg="StartContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" returns successfully"
Apr 28 02:03:27.555665 systemd[1]: run-containerd-runc-k8s.io-917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc-runc.Xz0JIn.mount: Deactivated successfully.
Apr 28 02:03:27.682630 kubelet[2529]: I0428 02:03:27.682234 2529 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 28 02:03:27.736926 systemd[1]: Created slice kubepods-burstable-pod9dbdbe79_d6f0_488e_8274_30d8fc714b2e.slice - libcontainer container kubepods-burstable-pod9dbdbe79_d6f0_488e_8274_30d8fc714b2e.slice.
Apr 28 02:03:27.758017 systemd[1]: Created slice kubepods-burstable-pod245702ae_f3d3_432b_adb0_6bb9640770d2.slice - libcontainer container kubepods-burstable-pod245702ae_f3d3_432b_adb0_6bb9640770d2.slice.
Apr 28 02:03:27.883551 kubelet[2529]: I0428 02:03:27.882478 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz9fc\" (UniqueName: \"kubernetes.io/projected/245702ae-f3d3-432b-adb0-6bb9640770d2-kube-api-access-wz9fc\") pod \"coredns-66bc5c9577-dc6ht\" (UID: \"245702ae-f3d3-432b-adb0-6bb9640770d2\") " pod="kube-system/coredns-66bc5c9577-dc6ht"
Apr 28 02:03:27.891631 kubelet[2529]: I0428 02:03:27.885251 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dbdbe79-d6f0-488e-8274-30d8fc714b2e-config-volume\") pod \"coredns-66bc5c9577-zhjg6\" (UID: \"9dbdbe79-d6f0-488e-8274-30d8fc714b2e\") " pod="kube-system/coredns-66bc5c9577-zhjg6"
Apr 28 02:03:27.891631 kubelet[2529]: I0428 02:03:27.887948 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzwg\" (UniqueName: \"kubernetes.io/projected/9dbdbe79-d6f0-488e-8274-30d8fc714b2e-kube-api-access-rhzwg\") pod \"coredns-66bc5c9577-zhjg6\" (UID: \"9dbdbe79-d6f0-488e-8274-30d8fc714b2e\") " pod="kube-system/coredns-66bc5c9577-zhjg6"
Apr 28 02:03:27.891631 kubelet[2529]: I0428 02:03:27.889645 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/245702ae-f3d3-432b-adb0-6bb9640770d2-config-volume\") pod \"coredns-66bc5c9577-dc6ht\" (UID: \"245702ae-f3d3-432b-adb0-6bb9640770d2\") " pod="kube-system/coredns-66bc5c9577-dc6ht"
Apr 28 02:03:28.054321 kubelet[2529]: E0428 02:03:28.054110 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:28.057534 containerd[1473]: time="2026-04-28T02:03:28.055642345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zhjg6,Uid:9dbdbe79-d6f0-488e-8274-30d8fc714b2e,Namespace:kube-system,Attempt:0,}"
Apr 28 02:03:28.072474 kubelet[2529]: E0428 02:03:28.069898 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:28.073108 containerd[1473]: time="2026-04-28T02:03:28.070698862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dc6ht,Uid:245702ae-f3d3-432b-adb0-6bb9640770d2,Namespace:kube-system,Attempt:0,}"
Apr 28 02:03:28.350831 kubelet[2529]: E0428 02:03:28.350470 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:28.382722 kubelet[2529]: I0428 02:03:28.382600 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j25pr" podStartSLOduration=5.607875222 podStartE2EDuration="14.382587662s" podCreationTimestamp="2026-04-28 02:03:14 +0000 UTC" firstStartedPulling="2026-04-28 02:03:14.728048113 +0000 UTC m=+7.953317660" lastFinishedPulling="2026-04-28 02:03:23.502760546 +0000 UTC m=+16.728030100" observedRunningTime="2026-04-28 02:03:28.382337533 +0000 UTC m=+21.607607085" watchObservedRunningTime="2026-04-28 02:03:28.382587662 +0000 UTC m=+21.607857224"
Apr 28 02:03:29.354718 kubelet[2529]: E0428 02:03:29.354617 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:29.654087 systemd-networkd[1396]: cilium_host: Link UP
Apr 28 02:03:29.654228 systemd-networkd[1396]: cilium_net: Link UP
Apr 28 02:03:29.654328 systemd-networkd[1396]: cilium_net: Gained carrier
Apr 28 02:03:29.654498 systemd-networkd[1396]: cilium_host: Gained carrier
Apr 28 02:03:29.778329 systemd-networkd[1396]: cilium_vxlan: Link UP
Apr 28 02:03:29.778640 systemd-networkd[1396]: cilium_vxlan: Gained carrier
Apr 28 02:03:29.969999 systemd-networkd[1396]: cilium_host: Gained IPv6LL
Apr 28 02:03:30.025559 kernel: NET: Registered PF_ALG protocol family
Apr 28 02:03:30.288949 systemd-networkd[1396]: cilium_net: Gained IPv6LL
Apr 28 02:03:30.357661 kubelet[2529]: E0428 02:03:30.357540 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:30.902806 systemd-networkd[1396]: lxc_health: Link UP
Apr 28 02:03:30.915315 systemd-networkd[1396]: lxc_health: Gained carrier
Apr 28 02:03:31.167698 systemd-networkd[1396]: lxcbda318f5b860: Link UP
Apr 28 02:03:31.175437 kernel: eth0: renamed from tmp5fc6c
Apr 28 02:03:31.187109 systemd-networkd[1396]: lxcbda318f5b860: Gained carrier
Apr 28 02:03:31.190508 systemd-networkd[1396]: lxc50696da37fe8: Link UP
Apr 28 02:03:31.197528 kernel: eth0: renamed from tmp28918
Apr 28 02:03:31.202973 systemd-networkd[1396]: lxc50696da37fe8: Gained carrier
Apr 28 02:03:31.773598 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL
Apr 28 02:03:32.528967 systemd-networkd[1396]: lxc50696da37fe8: Gained IPv6LL
Apr 28 02:03:32.588578 kubelet[2529]: E0428 02:03:32.588493 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:32.786166 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Apr 28 02:03:33.104845 systemd-networkd[1396]: lxcbda318f5b860: Gained IPv6LL
Apr 28 02:03:34.773804 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:55054.service - OpenSSH per-connection server daemon (10.0.0.1:55054).
Apr 28 02:03:34.838685 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 55054 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:34.840341 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:34.847837 systemd-logind[1463]: New session 8 of user core.
Apr 28 02:03:34.855726 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 28 02:03:35.019265 sshd[3764]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:35.022879 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:55054.service: Deactivated successfully.
Apr 28 02:03:35.024943 systemd[1]: session-8.scope: Deactivated successfully.
Apr 28 02:03:35.025931 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Apr 28 02:03:35.027305 systemd-logind[1463]: Removed session 8.
Apr 28 02:03:35.406139 containerd[1473]: time="2026-04-28T02:03:35.405298181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:03:35.406139 containerd[1473]: time="2026-04-28T02:03:35.405521429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:03:35.406139 containerd[1473]: time="2026-04-28T02:03:35.405533995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:35.406139 containerd[1473]: time="2026-04-28T02:03:35.405598782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:35.447261 systemd[1]: Started cri-containerd-28918278a1947b963abe0a11f4cb29aec94b5e1277792a9c2190e40c1a64314e.scope - libcontainer container 28918278a1947b963abe0a11f4cb29aec94b5e1277792a9c2190e40c1a64314e.
Apr 28 02:03:35.451268 containerd[1473]: time="2026-04-28T02:03:35.450846512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:03:35.451268 containerd[1473]: time="2026-04-28T02:03:35.450950084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:03:35.451268 containerd[1473]: time="2026-04-28T02:03:35.450962891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:35.451268 containerd[1473]: time="2026-04-28T02:03:35.451080699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:03:35.484892 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 02:03:35.499880 systemd[1]: Started cri-containerd-5fc6c5d1371e910402431e378df5f744fc9707f5cc6e0317501b01b426d8f793.scope - libcontainer container 5fc6c5d1371e910402431e378df5f744fc9707f5cc6e0317501b01b426d8f793.
Apr 28 02:03:35.533131 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 02:03:35.538448 containerd[1473]: time="2026-04-28T02:03:35.534522640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dc6ht,Uid:245702ae-f3d3-432b-adb0-6bb9640770d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"28918278a1947b963abe0a11f4cb29aec94b5e1277792a9c2190e40c1a64314e\""
Apr 28 02:03:35.542643 kubelet[2529]: E0428 02:03:35.542156 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:35.554350 containerd[1473]: time="2026-04-28T02:03:35.554221854Z" level=info msg="CreateContainer within sandbox \"28918278a1947b963abe0a11f4cb29aec94b5e1277792a9c2190e40c1a64314e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 28 02:03:35.579503 containerd[1473]: time="2026-04-28T02:03:35.579277863Z" level=info msg="CreateContainer within sandbox \"28918278a1947b963abe0a11f4cb29aec94b5e1277792a9c2190e40c1a64314e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5f28e7a1bf216efae0dcb846f4fc6e9780fdb575a5cc9866e31b061d4ee8da7\""
Apr 28 02:03:35.582461 containerd[1473]: time="2026-04-28T02:03:35.581445451Z" level=info msg="StartContainer for \"c5f28e7a1bf216efae0dcb846f4fc6e9780fdb575a5cc9866e31b061d4ee8da7\""
Apr 28 02:03:35.608863 containerd[1473]: time="2026-04-28T02:03:35.608343027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zhjg6,Uid:9dbdbe79-d6f0-488e-8274-30d8fc714b2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc6c5d1371e910402431e378df5f744fc9707f5cc6e0317501b01b426d8f793\""
Apr 28 02:03:35.612542 kubelet[2529]: E0428 02:03:35.612449 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:35.619458 containerd[1473]: time="2026-04-28T02:03:35.619285889Z" level=info msg="CreateContainer within sandbox \"5fc6c5d1371e910402431e378df5f744fc9707f5cc6e0317501b01b426d8f793\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 28 02:03:35.644282 containerd[1473]: time="2026-04-28T02:03:35.644132609Z" level=info msg="CreateContainer within sandbox \"5fc6c5d1371e910402431e378df5f744fc9707f5cc6e0317501b01b426d8f793\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bcefd8e10e62a428e05f053cebfc8fec1d15911bb7129d90ec1f427757c6ce8\""
Apr 28 02:03:35.648277 containerd[1473]: time="2026-04-28T02:03:35.648142934Z" level=info msg="StartContainer for \"1bcefd8e10e62a428e05f053cebfc8fec1d15911bb7129d90ec1f427757c6ce8\""
Apr 28 02:03:35.655268 systemd[1]: Started cri-containerd-c5f28e7a1bf216efae0dcb846f4fc6e9780fdb575a5cc9866e31b061d4ee8da7.scope - libcontainer container c5f28e7a1bf216efae0dcb846f4fc6e9780fdb575a5cc9866e31b061d4ee8da7.
Apr 28 02:03:35.722524 containerd[1473]: time="2026-04-28T02:03:35.722303577Z" level=info msg="StartContainer for \"c5f28e7a1bf216efae0dcb846f4fc6e9780fdb575a5cc9866e31b061d4ee8da7\" returns successfully"
Apr 28 02:03:35.730617 systemd[1]: Started cri-containerd-1bcefd8e10e62a428e05f053cebfc8fec1d15911bb7129d90ec1f427757c6ce8.scope - libcontainer container 1bcefd8e10e62a428e05f053cebfc8fec1d15911bb7129d90ec1f427757c6ce8.
Apr 28 02:03:35.794080 containerd[1473]: time="2026-04-28T02:03:35.793844859Z" level=info msg="StartContainer for \"1bcefd8e10e62a428e05f053cebfc8fec1d15911bb7129d90ec1f427757c6ce8\" returns successfully"
Apr 28 02:03:36.385950 kubelet[2529]: E0428 02:03:36.385314 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:36.390781 kubelet[2529]: E0428 02:03:36.390609 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:36.650227 kubelet[2529]: I0428 02:03:36.650026 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zhjg6" podStartSLOduration=22.649966227 podStartE2EDuration="22.649966227s" podCreationTimestamp="2026-04-28 02:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:36.411832048 +0000 UTC m=+29.637101596" watchObservedRunningTime="2026-04-28 02:03:36.649966227 +0000 UTC m=+29.875235779"
Apr 28 02:03:37.393202 kubelet[2529]: E0428 02:03:37.393038 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:37.393202 kubelet[2529]: E0428 02:03:37.393121 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:38.417978 kubelet[2529]: E0428 02:03:38.417728 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:38.417978 kubelet[2529]: E0428 02:03:38.417883 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:39.594263 kubelet[2529]: I0428 02:03:39.594094 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 28 02:03:39.595672 kubelet[2529]: E0428 02:03:39.594963 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:39.616311 kubelet[2529]: I0428 02:03:39.616112 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dc6ht" podStartSLOduration=25.616099966 podStartE2EDuration="25.616099966s" podCreationTimestamp="2026-04-28 02:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:03:36.678759869 +0000 UTC m=+29.904029421" watchObservedRunningTime="2026-04-28 02:03:39.616099966 +0000 UTC m=+32.841369524"
Apr 28 02:03:40.034991 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342).
Apr 28 02:03:40.114432 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:40.116113 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:40.122716 systemd-logind[1463]: New session 9 of user core.
Apr 28 02:03:40.133599 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 28 02:03:40.308564 sshd[3960]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:40.316027 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:58342.service: Deactivated successfully.
Apr 28 02:03:40.318147 systemd[1]: session-9.scope: Deactivated successfully.
Apr 28 02:03:40.320597 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Apr 28 02:03:40.324595 systemd-logind[1463]: Removed session 9.
Apr 28 02:03:40.424331 kubelet[2529]: E0428 02:03:40.424073 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:03:45.329978 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:58348.service - OpenSSH per-connection server daemon (10.0.0.1:58348).
Apr 28 02:03:45.438737 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 58348 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:45.445944 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:45.466592 systemd-logind[1463]: New session 10 of user core.
Apr 28 02:03:45.472995 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 28 02:03:45.655335 sshd[3977]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:45.659144 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:58348.service: Deactivated successfully.
Apr 28 02:03:45.660678 systemd[1]: session-10.scope: Deactivated successfully.
Apr 28 02:03:45.661727 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Apr 28 02:03:45.663169 systemd-logind[1463]: Removed session 10.
Apr 28 02:03:50.696338 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:34536.service - OpenSSH per-connection server daemon (10.0.0.1:34536).
Apr 28 02:03:50.733468 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 34536 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:50.735900 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:50.742514 systemd-logind[1463]: New session 11 of user core.
Apr 28 02:03:50.750102 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 28 02:03:50.888154 sshd[3992]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:50.897850 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:34536.service: Deactivated successfully.
Apr 28 02:03:50.899502 systemd[1]: session-11.scope: Deactivated successfully.
Apr 28 02:03:50.901094 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Apr 28 02:03:50.904185 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:34546.service - OpenSSH per-connection server daemon (10.0.0.1:34546).
Apr 28 02:03:50.906260 systemd-logind[1463]: Removed session 11.
Apr 28 02:03:50.953328 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 34546 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:50.954642 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:50.959661 systemd-logind[1463]: New session 12 of user core.
Apr 28 02:03:50.970481 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 28 02:03:51.168886 sshd[4008]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:51.195182 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:34556.service - OpenSSH per-connection server daemon (10.0.0.1:34556).
Apr 28 02:03:51.196315 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:34546.service: Deactivated successfully.
Apr 28 02:03:51.203658 systemd[1]: session-12.scope: Deactivated successfully.
Apr 28 02:03:51.207826 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Apr 28 02:03:51.225495 systemd-logind[1463]: Removed session 12.
Apr 28 02:03:51.267125 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 34556 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:51.269341 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:51.282840 systemd-logind[1463]: New session 13 of user core.
Apr 28 02:03:51.293641 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 28 02:03:51.489029 sshd[4019]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:51.493331 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:34556.service: Deactivated successfully.
Apr 28 02:03:51.494945 systemd[1]: session-13.scope: Deactivated successfully.
Apr 28 02:03:51.499525 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Apr 28 02:03:51.502345 systemd-logind[1463]: Removed session 13.
Apr 28 02:03:56.504275 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:34566.service - OpenSSH per-connection server daemon (10.0.0.1:34566).
Apr 28 02:03:56.564899 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 34566 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:03:56.569327 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:03:56.581105 systemd-logind[1463]: New session 14 of user core.
Apr 28 02:03:56.600762 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 28 02:03:56.843701 sshd[4035]: pam_unix(sshd:session): session closed for user core
Apr 28 02:03:56.848057 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:34566.service: Deactivated successfully.
Apr 28 02:03:56.849721 systemd[1]: session-14.scope: Deactivated successfully.
Apr 28 02:03:56.850924 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Apr 28 02:03:56.853258 systemd-logind[1463]: Removed session 14.
Apr 28 02:04:01.867070 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:56030.service - OpenSSH per-connection server daemon (10.0.0.1:56030).
Apr 28 02:04:01.927303 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 56030 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:01.929514 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:01.936349 systemd-logind[1463]: New session 15 of user core.
Apr 28 02:04:01.948723 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 28 02:04:02.147880 sshd[4050]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:02.156124 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:56030.service: Deactivated successfully.
Apr 28 02:04:02.157564 systemd[1]: session-15.scope: Deactivated successfully.
Apr 28 02:04:02.159143 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Apr 28 02:04:02.166854 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034).
Apr 28 02:04:02.168752 systemd-logind[1463]: Removed session 15.
Apr 28 02:04:02.207533 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:02.211099 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:02.217139 systemd-logind[1463]: New session 16 of user core.
Apr 28 02:04:02.229696 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 28 02:04:02.538821 sshd[4064]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:02.548230 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:56034.service: Deactivated successfully.
Apr 28 02:04:02.551728 systemd[1]: session-16.scope: Deactivated successfully.
Apr 28 02:04:02.554573 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Apr 28 02:04:02.568444 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:56036.service - OpenSSH per-connection server daemon (10.0.0.1:56036).
Apr 28 02:04:02.570458 systemd-logind[1463]: Removed session 16.
Apr 28 02:04:02.613756 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 56036 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:02.615601 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:02.621618 systemd-logind[1463]: New session 17 of user core.
Apr 28 02:04:02.632689 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 28 02:04:03.266336 sshd[4077]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:03.285671 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:56036.service: Deactivated successfully.
Apr 28 02:04:03.289089 systemd[1]: session-17.scope: Deactivated successfully.
Apr 28 02:04:03.294747 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Apr 28 02:04:03.306956 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:56044.service - OpenSSH per-connection server daemon (10.0.0.1:56044).
Apr 28 02:04:03.309329 systemd-logind[1463]: Removed session 17.
Apr 28 02:04:03.348594 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 56044 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:03.350092 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:03.355907 systemd-logind[1463]: New session 18 of user core.
Apr 28 02:04:03.370563 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 28 02:04:03.671063 sshd[4098]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:03.678107 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:56044.service: Deactivated successfully.
Apr 28 02:04:03.679648 systemd[1]: session-18.scope: Deactivated successfully.
Apr 28 02:04:03.683035 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Apr 28 02:04:03.691822 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:56054.service - OpenSSH per-connection server daemon (10.0.0.1:56054).
Apr 28 02:04:03.693436 systemd-logind[1463]: Removed session 18.
Apr 28 02:04:03.734440 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 56054 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:03.736492 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:03.743048 systemd-logind[1463]: New session 19 of user core.
Apr 28 02:04:03.755565 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 28 02:04:03.906936 sshd[4111]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:03.913972 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:56054.service: Deactivated successfully.
Apr 28 02:04:03.916091 systemd[1]: session-19.scope: Deactivated successfully.
Apr 28 02:04:03.919056 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Apr 28 02:04:03.921709 systemd-logind[1463]: Removed session 19.
Apr 28 02:04:08.941939 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068).
Apr 28 02:04:08.974103 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:08.975181 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:08.980115 systemd-logind[1463]: New session 20 of user core.
Apr 28 02:04:08.990647 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 28 02:04:09.144273 sshd[4130]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:09.148034 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:56068.service: Deactivated successfully.
Apr 28 02:04:09.149922 systemd[1]: session-20.scope: Deactivated successfully.
Apr 28 02:04:09.151987 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Apr 28 02:04:09.157510 systemd-logind[1463]: Removed session 20.
Apr 28 02:04:14.156450 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644).
Apr 28 02:04:14.200940 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:14.203686 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:14.212145 systemd-logind[1463]: New session 21 of user core.
Apr 28 02:04:14.223055 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 28 02:04:14.369292 sshd[4146]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:14.373580 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:33644.service: Deactivated successfully.
Apr 28 02:04:14.375229 systemd[1]: session-21.scope: Deactivated successfully.
Apr 28 02:04:14.376227 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Apr 28 02:04:14.378127 systemd-logind[1463]: Removed session 21.
Apr 28 02:04:19.381925 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:33650.service - OpenSSH per-connection server daemon (10.0.0.1:33650).
Apr 28 02:04:19.426155 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 33650 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:19.427794 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:19.435480 systemd-logind[1463]: New session 22 of user core.
Apr 28 02:04:19.441675 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 28 02:04:19.590844 sshd[4162]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:19.597750 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:33650.service: Deactivated successfully.
Apr 28 02:04:19.599669 systemd[1]: session-22.scope: Deactivated successfully.
Apr 28 02:04:19.601076 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Apr 28 02:04:19.609224 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:52224.service - OpenSSH per-connection server daemon (10.0.0.1:52224).
Apr 28 02:04:19.611007 systemd-logind[1463]: Removed session 22.
Apr 28 02:04:19.646738 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 52224 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 02:04:19.647979 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:04:19.653106 systemd-logind[1463]: New session 23 of user core.
Apr 28 02:04:19.664737 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 28 02:04:19.924352 kubelet[2529]: E0428 02:04:19.924049 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:21.082726 containerd[1473]: time="2026-04-28T02:04:21.082546067Z" level=info msg="StopContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" with timeout 30 (s)"
Apr 28 02:04:21.083341 containerd[1473]: time="2026-04-28T02:04:21.083167335Z" level=info msg="Stop container \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" with signal terminated"
Apr 28 02:04:21.164331 systemd[1]: cri-containerd-b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb.scope: Deactivated successfully.
Apr 28 02:04:21.173629 containerd[1473]: time="2026-04-28T02:04:21.171998182Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 02:04:21.219845 containerd[1473]: time="2026-04-28T02:04:21.219336991Z" level=info msg="StopContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" with timeout 2 (s)"
Apr 28 02:04:21.221759 containerd[1473]: time="2026-04-28T02:04:21.221465439Z" level=info msg="Stop container \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" with signal terminated"
Apr 28 02:04:21.236474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb-rootfs.mount: Deactivated successfully.
Apr 28 02:04:21.247834 systemd-networkd[1396]: lxc_health: Link DOWN
Apr 28 02:04:21.252310 containerd[1473]: time="2026-04-28T02:04:21.248226372Z" level=info msg="shim disconnected" id=b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb namespace=k8s.io
Apr 28 02:04:21.252310 containerd[1473]: time="2026-04-28T02:04:21.248283948Z" level=warning msg="cleaning up after shim disconnected" id=b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb namespace=k8s.io
Apr 28 02:04:21.252310 containerd[1473]: time="2026-04-28T02:04:21.248290480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:21.247842 systemd-networkd[1396]: lxc_health: Lost carrier
Apr 28 02:04:21.274009 systemd[1]: cri-containerd-917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc.scope: Deactivated successfully.
Apr 28 02:04:21.274732 systemd[1]: cri-containerd-917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc.scope: Consumed 8.820s CPU time.
Apr 28 02:04:21.303742 containerd[1473]: time="2026-04-28T02:04:21.303331870Z" level=info msg="StopContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" returns successfully"
Apr 28 02:04:21.305431 containerd[1473]: time="2026-04-28T02:04:21.304549229Z" level=info msg="StopPodSandbox for \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\""
Apr 28 02:04:21.305431 containerd[1473]: time="2026-04-28T02:04:21.304637891Z" level=info msg="Container to stop \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.311295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb-shm.mount: Deactivated successfully.
Apr 28 02:04:21.322227 systemd[1]: cri-containerd-4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb.scope: Deactivated successfully.
Apr 28 02:04:21.327185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc-rootfs.mount: Deactivated successfully.
Apr 28 02:04:21.342350 containerd[1473]: time="2026-04-28T02:04:21.341200251Z" level=info msg="shim disconnected" id=917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc namespace=k8s.io
Apr 28 02:04:21.342350 containerd[1473]: time="2026-04-28T02:04:21.341309733Z" level=warning msg="cleaning up after shim disconnected" id=917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc namespace=k8s.io
Apr 28 02:04:21.342350 containerd[1473]: time="2026-04-28T02:04:21.341318277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:21.364326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb-rootfs.mount: Deactivated successfully.
Apr 28 02:04:21.374564 containerd[1473]: time="2026-04-28T02:04:21.374476529Z" level=info msg="shim disconnected" id=4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb namespace=k8s.io
Apr 28 02:04:21.375220 containerd[1473]: time="2026-04-28T02:04:21.374983221Z" level=warning msg="cleaning up after shim disconnected" id=4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb namespace=k8s.io
Apr 28 02:04:21.375220 containerd[1473]: time="2026-04-28T02:04:21.374996459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:21.378688 containerd[1473]: time="2026-04-28T02:04:21.378536727Z" level=info msg="StopContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" returns successfully"
Apr 28 02:04:21.379239 containerd[1473]: time="2026-04-28T02:04:21.379214928Z" level=info msg="StopPodSandbox for \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\""
Apr 28 02:04:21.379267 containerd[1473]: time="2026-04-28T02:04:21.379243571Z" level=info msg="Container to stop \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.379267 containerd[1473]: time="2026-04-28T02:04:21.379252409Z" level=info msg="Container to stop \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.379267 containerd[1473]: time="2026-04-28T02:04:21.379259387Z" level=info msg="Container to stop \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.379267 containerd[1473]: time="2026-04-28T02:04:21.379266070Z" level=info msg="Container to stop \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.379780 containerd[1473]: time="2026-04-28T02:04:21.379272446Z" level=info msg="Container to stop \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 02:04:21.387245 systemd[1]: cri-containerd-b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9.scope: Deactivated successfully.
Apr 28 02:04:21.403547 containerd[1473]: time="2026-04-28T02:04:21.401040548Z" level=warning msg="cleanup warnings time=\"2026-04-28T02:04:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 02:04:21.412962 containerd[1473]: time="2026-04-28T02:04:21.412839376Z" level=info msg="TearDown network for sandbox \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\" successfully"
Apr 28 02:04:21.412962 containerd[1473]: time="2026-04-28T02:04:21.412922564Z" level=info msg="StopPodSandbox for \"4af16fa9847008101780e79609bd3fa1e44ad9fc2792bc046cbaf815ccf157cb\" returns successfully"
Apr 28 02:04:21.426513 containerd[1473]: time="2026-04-28T02:04:21.426229853Z" level=info msg="shim disconnected" id=b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9 namespace=k8s.io
Apr 28 02:04:21.426513 containerd[1473]: time="2026-04-28T02:04:21.426274100Z" level=warning msg="cleaning up after shim disconnected" id=b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9 namespace=k8s.io
Apr 28 02:04:21.426513 containerd[1473]: time="2026-04-28T02:04:21.426280040Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:21.454509 containerd[1473]: time="2026-04-28T02:04:21.454446826Z" level=info msg="TearDown network for sandbox \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" successfully"
Apr 28 02:04:21.454509 containerd[1473]: time="2026-04-28T02:04:21.454500847Z" level=info msg="StopPodSandbox for \"b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9\" returns successfully"
Apr 28 02:04:21.480988 kubelet[2529]: I0428 02:04:21.480798 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cni-path" (OuterVolumeSpecName: "cni-path") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.480988 kubelet[2529]: I0428 02:04:21.480959 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cni-path\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.480988 kubelet[2529]: I0428 02:04:21.480980 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-config-path\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.480988 kubelet[2529]: I0428 02:04:21.480998 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf54z\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-kube-api-access-vf54z\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481011 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20971217-5913-4bbf-9ab4-c8eb8a4c3642-clustermesh-secrets\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481023 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-lib-modules\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481037 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blkp4\" (UniqueName: \"kubernetes.io/projected/e79f4351-2402-451b-9b19-de5c8871a487-kube-api-access-blkp4\") pod \"e79f4351-2402-451b-9b19-de5c8871a487\" (UID: \"e79f4351-2402-451b-9b19-de5c8871a487\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481048 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hostproc\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481060 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hubble-tls\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.481958 kubelet[2529]: I0428 02:04:21.481069 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-cgroup\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481080 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-kernel\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481089 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-etc-cni-netd\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481102 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-net\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481114 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e79f4351-2402-451b-9b19-de5c8871a487-cilium-config-path\") pod \"e79f4351-2402-451b-9b19-de5c8871a487\" (UID: \"e79f4351-2402-451b-9b19-de5c8871a487\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481124 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-run\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482073 kubelet[2529]: I0428 02:04:21.481133 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-xtables-lock\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482186 kubelet[2529]: I0428 02:04:21.481142 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-bpf-maps\") pod \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\" (UID: \"20971217-5913-4bbf-9ab4-c8eb8a4c3642\") "
Apr 28 02:04:21.482186 kubelet[2529]: I0428 02:04:21.481167 2529 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.482186 kubelet[2529]: I0428 02:04:21.481182 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.483503 kubelet[2529]: I0428 02:04:21.482688 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.483503 kubelet[2529]: I0428 02:04:21.482954 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 02:04:21.483503 kubelet[2529]: I0428 02:04:21.482982 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.483503 kubelet[2529]: I0428 02:04:21.482993 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.483503 kubelet[2529]: I0428 02:04:21.483004 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.483746 kubelet[2529]: I0428 02:04:21.483283 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hostproc" (OuterVolumeSpecName: "hostproc") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.486101 kubelet[2529]: I0428 02:04:21.485036 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79f4351-2402-451b-9b19-de5c8871a487-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e79f4351-2402-451b-9b19-de5c8871a487" (UID: "e79f4351-2402-451b-9b19-de5c8871a487"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 02:04:21.486101 kubelet[2529]: I0428 02:04:21.485130 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.486101 kubelet[2529]: I0428 02:04:21.485144 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.486101 kubelet[2529]: I0428 02:04:21.485226 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 02:04:21.491333 kubelet[2529]: I0428 02:04:21.491208 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20971217-5913-4bbf-9ab4-c8eb8a4c3642-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 28 02:04:21.491799 kubelet[2529]: I0428 02:04:21.491729 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79f4351-2402-451b-9b19-de5c8871a487-kube-api-access-blkp4" (OuterVolumeSpecName: "kube-api-access-blkp4") pod "e79f4351-2402-451b-9b19-de5c8871a487" (UID: "e79f4351-2402-451b-9b19-de5c8871a487"). InnerVolumeSpecName "kube-api-access-blkp4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 02:04:21.491950 kubelet[2529]: I0428 02:04:21.491836 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-kube-api-access-vf54z" (OuterVolumeSpecName: "kube-api-access-vf54z") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "kube-api-access-vf54z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 02:04:21.493455 kubelet[2529]: I0428 02:04:21.493279 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20971217-5913-4bbf-9ab4-c8eb8a4c3642" (UID: "20971217-5913-4bbf-9ab4-c8eb8a4c3642"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.581869 2529 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582024 2529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-blkp4\" (UniqueName: \"kubernetes.io/projected/e79f4351-2402-451b-9b19-de5c8871a487-kube-api-access-blkp4\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582041 2529 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582050 2529 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582057 2529 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582064 2529 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582071 2529 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.582185 kubelet[2529]: I0428 02:04:21.582077 2529 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582084 2529 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e79f4351-2402-451b-9b19-de5c8871a487-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582091 2529 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582097 2529 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582102 2529 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20971217-5913-4bbf-9ab4-c8eb8a4c3642-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582109 2529 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20971217-5913-4bbf-9ab4-c8eb8a4c3642-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582115 2529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vf54z\" (UniqueName: \"kubernetes.io/projected/20971217-5913-4bbf-9ab4-c8eb8a4c3642-kube-api-access-vf54z\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.583669 kubelet[2529]: I0428 02:04:21.582121 2529 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20971217-5913-4bbf-9ab4-c8eb8a4c3642-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 28 02:04:21.635520 kubelet[2529]: I0428 02:04:21.634918 2529 scope.go:117] "RemoveContainer" containerID="b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb"
Apr 28 02:04:21.645027 containerd[1473]: time="2026-04-28T02:04:21.644319567Z" level=info msg="RemoveContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\""
Apr 28 02:04:21.645843 systemd[1]: Removed slice kubepods-besteffort-pode79f4351_2402_451b_9b19_de5c8871a487.slice - libcontainer container kubepods-besteffort-pode79f4351_2402_451b_9b19_de5c8871a487.slice.
Apr 28 02:04:21.653765 containerd[1473]: time="2026-04-28T02:04:21.653038622Z" level=info msg="RemoveContainer for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" returns successfully"
Apr 28 02:04:21.654162 kubelet[2529]: I0428 02:04:21.653719 2529 scope.go:117] "RemoveContainer" containerID="b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb"
Apr 28 02:04:21.658511 containerd[1473]: time="2026-04-28T02:04:21.658319456Z" level=error msg="ContainerStatus for \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\": not found"
Apr 28 02:04:21.658496 systemd[1]: Removed slice kubepods-burstable-pod20971217_5913_4bbf_9ab4_c8eb8a4c3642.slice - libcontainer container kubepods-burstable-pod20971217_5913_4bbf_9ab4_c8eb8a4c3642.slice.
Apr 28 02:04:21.659208 systemd[1]: kubepods-burstable-pod20971217_5913_4bbf_9ab4_c8eb8a4c3642.slice: Consumed 8.977s CPU time.
Apr 28 02:04:21.680535 kubelet[2529]: E0428 02:04:21.680122 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\": not found" containerID="b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb" Apr 28 02:04:21.681961 kubelet[2529]: I0428 02:04:21.680340 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb"} err="failed to get container status \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b861876ee35b955ab1e6d8d979145196b99486088210004b6b4771e95a613edb\": not found" Apr 28 02:04:21.681961 kubelet[2529]: I0428 02:04:21.681687 2529 scope.go:117] "RemoveContainer" containerID="917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc" Apr 28 02:04:21.684760 containerd[1473]: time="2026-04-28T02:04:21.684693796Z" level=info msg="RemoveContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\"" Apr 28 02:04:21.701249 containerd[1473]: time="2026-04-28T02:04:21.700285391Z" level=info msg="RemoveContainer for \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" returns successfully" Apr 28 02:04:21.709504 kubelet[2529]: I0428 02:04:21.706120 2529 scope.go:117] "RemoveContainer" containerID="f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69" Apr 28 02:04:21.715283 containerd[1473]: time="2026-04-28T02:04:21.715068160Z" level=info msg="RemoveContainer for \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\"" Apr 28 02:04:21.727562 containerd[1473]: time="2026-04-28T02:04:21.727296765Z" level=info msg="RemoveContainer for \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\" returns successfully" 
Apr 28 02:04:21.729324 kubelet[2529]: I0428 02:04:21.729300 2529 scope.go:117] "RemoveContainer" containerID="145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c" Apr 28 02:04:21.735737 containerd[1473]: time="2026-04-28T02:04:21.735705806Z" level=info msg="RemoveContainer for \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\"" Apr 28 02:04:21.756270 containerd[1473]: time="2026-04-28T02:04:21.754183789Z" level=info msg="RemoveContainer for \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\" returns successfully" Apr 28 02:04:21.756828 kubelet[2529]: I0428 02:04:21.754896 2529 scope.go:117] "RemoveContainer" containerID="f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c" Apr 28 02:04:21.757845 containerd[1473]: time="2026-04-28T02:04:21.757724625Z" level=info msg="RemoveContainer for \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\"" Apr 28 02:04:21.762791 containerd[1473]: time="2026-04-28T02:04:21.762567095Z" level=info msg="RemoveContainer for \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\" returns successfully" Apr 28 02:04:21.763291 kubelet[2529]: I0428 02:04:21.763259 2529 scope.go:117] "RemoveContainer" containerID="3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e" Apr 28 02:04:21.765489 containerd[1473]: time="2026-04-28T02:04:21.765310434Z" level=info msg="RemoveContainer for \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\"" Apr 28 02:04:21.772853 containerd[1473]: time="2026-04-28T02:04:21.772510888Z" level=info msg="RemoveContainer for \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\" returns successfully" Apr 28 02:04:21.773882 kubelet[2529]: I0428 02:04:21.773678 2529 scope.go:117] "RemoveContainer" containerID="917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc" Apr 28 02:04:21.774347 containerd[1473]: time="2026-04-28T02:04:21.774257338Z" level=error msg="ContainerStatus for 
\"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\": not found" Apr 28 02:04:21.775966 kubelet[2529]: E0428 02:04:21.775487 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\": not found" containerID="917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc" Apr 28 02:04:21.775966 kubelet[2529]: I0428 02:04:21.775646 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc"} err="failed to get container status \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"917be8d28163b4fe3ba084fcae2ccb2033f3d068e5275f4616b0990e036097fc\": not found" Apr 28 02:04:21.775966 kubelet[2529]: I0428 02:04:21.775726 2529 scope.go:117] "RemoveContainer" containerID="f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69" Apr 28 02:04:21.776944 containerd[1473]: time="2026-04-28T02:04:21.776490458Z" level=error msg="ContainerStatus for \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\": not found" Apr 28 02:04:21.777185 kubelet[2529]: E0428 02:04:21.777121 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\": not found" 
containerID="f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69" Apr 28 02:04:21.777185 kubelet[2529]: I0428 02:04:21.777144 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69"} err="failed to get container status \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\": rpc error: code = NotFound desc = an error occurred when try to find container \"f307136200bbe4eb96b4fa2120eb4fbd76aeeb8961ad1362be33a9bc04e76b69\": not found" Apr 28 02:04:21.777185 kubelet[2529]: I0428 02:04:21.777158 2529 scope.go:117] "RemoveContainer" containerID="145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c" Apr 28 02:04:21.777664 containerd[1473]: time="2026-04-28T02:04:21.777531531Z" level=error msg="ContainerStatus for \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\": not found" Apr 28 02:04:21.778020 kubelet[2529]: E0428 02:04:21.777854 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\": not found" containerID="145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c" Apr 28 02:04:21.778020 kubelet[2529]: I0428 02:04:21.777879 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c"} err="failed to get container status \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"145a5ef190d14e4a4186d329a14edcde5a76e905e856ab33c2ca819d7116bf4c\": not found" Apr 28 
02:04:21.778020 kubelet[2529]: I0428 02:04:21.777915 2529 scope.go:117] "RemoveContainer" containerID="f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c" Apr 28 02:04:21.778474 containerd[1473]: time="2026-04-28T02:04:21.778312762Z" level=error msg="ContainerStatus for \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\": not found" Apr 28 02:04:21.778739 kubelet[2529]: E0428 02:04:21.778553 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\": not found" containerID="f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c" Apr 28 02:04:21.778739 kubelet[2529]: I0428 02:04:21.778702 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c"} err="failed to get container status \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0e412848cafa0df6f16d6895c8d47a9ba53681840ccd46001fd55d48ecba11c\": not found" Apr 28 02:04:21.778739 kubelet[2529]: I0428 02:04:21.778719 2529 scope.go:117] "RemoveContainer" containerID="3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e" Apr 28 02:04:21.779064 containerd[1473]: time="2026-04-28T02:04:21.779039277Z" level=error msg="ContainerStatus for \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\": not found" Apr 28 02:04:21.779741 kubelet[2529]: E0428 02:04:21.779247 2529 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\": not found" containerID="3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e" Apr 28 02:04:21.779741 kubelet[2529]: I0428 02:04:21.779261 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e"} err="failed to get container status \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d37ea3e69788f29fccdc9fc5af8db3b5f11159160a3017aab30e2218386fb3e\": not found" Apr 28 02:04:22.043813 kubelet[2529]: E0428 02:04:22.043559 2529 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 02:04:22.095358 systemd[1]: var-lib-kubelet-pods-e79f4351\x2d2402\x2d451b\x2d9b19\x2dde5c8871a487-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dblkp4.mount: Deactivated successfully. Apr 28 02:04:22.096279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9-rootfs.mount: Deactivated successfully. Apr 28 02:04:22.096331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5a0cc3524ccd6c95fdce555a866c5e4973d3cbe56bdfc00923133c2378c2ef9-shm.mount: Deactivated successfully. Apr 28 02:04:22.096617 systemd[1]: var-lib-kubelet-pods-20971217\x2d5913\x2d4bbf\x2d9ab4\x2dc8eb8a4c3642-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvf54z.mount: Deactivated successfully. 
Apr 28 02:04:22.096697 systemd[1]: var-lib-kubelet-pods-20971217\x2d5913\x2d4bbf\x2d9ab4\x2dc8eb8a4c3642-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 28 02:04:22.096735 systemd[1]: var-lib-kubelet-pods-20971217\x2d5913\x2d4bbf\x2d9ab4\x2dc8eb8a4c3642-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 28 02:04:22.926980 kubelet[2529]: I0428 02:04:22.926829 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20971217-5913-4bbf-9ab4-c8eb8a4c3642" path="/var/lib/kubelet/pods/20971217-5913-4bbf-9ab4-c8eb8a4c3642/volumes" Apr 28 02:04:22.927812 kubelet[2529]: I0428 02:04:22.927758 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79f4351-2402-451b-9b19-de5c8871a487" path="/var/lib/kubelet/pods/e79f4351-2402-451b-9b19-de5c8871a487/volumes" Apr 28 02:04:23.006299 sshd[4176]: pam_unix(sshd:session): session closed for user core Apr 28 02:04:23.021837 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:52224.service: Deactivated successfully. Apr 28 02:04:23.023312 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 02:04:23.025204 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Apr 28 02:04:23.029930 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:52230.service - OpenSSH per-connection server daemon (10.0.0.1:52230). Apr 28 02:04:23.031151 systemd-logind[1463]: Removed session 23. Apr 28 02:04:23.072717 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 52230 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:04:23.074215 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:04:23.080803 systemd-logind[1463]: New session 24 of user core. Apr 28 02:04:23.095722 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 28 02:04:23.795918 sshd[4341]: pam_unix(sshd:session): session closed for user core Apr 28 02:04:23.811263 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:52230.service: Deactivated successfully. Apr 28 02:04:23.818839 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 02:04:23.821340 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Apr 28 02:04:23.831616 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:52234.service - OpenSSH per-connection server daemon (10.0.0.1:52234). Apr 28 02:04:23.839074 systemd-logind[1463]: Removed session 24. Apr 28 02:04:23.854014 systemd[1]: Created slice kubepods-burstable-pod02169a02_9a4e_4d7c_b828_48ee7e385e7a.slice - libcontainer container kubepods-burstable-pod02169a02_9a4e_4d7c_b828_48ee7e385e7a.slice. Apr 28 02:04:23.892460 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 52234 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:04:23.896152 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:04:23.909209 kubelet[2529]: I0428 02:04:23.909037 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-lib-modules\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.909209 kubelet[2529]: I0428 02:04:23.909113 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-host-proc-sys-net\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.909209 kubelet[2529]: I0428 02:04:23.909128 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv5b\" (UniqueName: 
\"kubernetes.io/projected/02169a02-9a4e-4d7c-b828-48ee7e385e7a-kube-api-access-qjv5b\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.909209 kubelet[2529]: I0428 02:04:23.909141 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-cni-path\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.909209 kubelet[2529]: I0428 02:04:23.909199 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02169a02-9a4e-4d7c-b828-48ee7e385e7a-clustermesh-secrets\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909210 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02169a02-9a4e-4d7c-b828-48ee7e385e7a-cilium-config-path\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909221 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02169a02-9a4e-4d7c-b828-48ee7e385e7a-hubble-tls\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909238 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-cilium-cgroup\") pod \"cilium-wckjt\" (UID: 
\"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909259 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-etc-cni-netd\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909269 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-xtables-lock\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911080 kubelet[2529]: I0428 02:04:23.909280 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-host-proc-sys-kernel\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911218 kubelet[2529]: I0428 02:04:23.909294 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-cilium-run\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911218 kubelet[2529]: I0428 02:04:23.909303 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-hostproc\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911218 kubelet[2529]: I0428 02:04:23.909315 2529 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02169a02-9a4e-4d7c-b828-48ee7e385e7a-cilium-ipsec-secrets\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.911218 kubelet[2529]: I0428 02:04:23.909326 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02169a02-9a4e-4d7c-b828-48ee7e385e7a-bpf-maps\") pod \"cilium-wckjt\" (UID: \"02169a02-9a4e-4d7c-b828-48ee7e385e7a\") " pod="kube-system/cilium-wckjt" Apr 28 02:04:23.916633 systemd-logind[1463]: New session 25 of user core. Apr 28 02:04:23.927316 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 28 02:04:23.992013 sshd[4354]: pam_unix(sshd:session): session closed for user core Apr 28 02:04:23.998941 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:52234.service: Deactivated successfully. Apr 28 02:04:24.001228 systemd[1]: session-25.scope: Deactivated successfully. Apr 28 02:04:24.003514 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Apr 28 02:04:24.017016 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:52240.service - OpenSSH per-connection server daemon (10.0.0.1:52240). Apr 28 02:04:24.026620 systemd-logind[1463]: Removed session 25. Apr 28 02:04:24.057328 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 52240 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 02:04:24.060007 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:04:24.071110 systemd-logind[1463]: New session 26 of user core. Apr 28 02:04:24.084778 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 28 02:04:24.185688 kubelet[2529]: E0428 02:04:24.184710 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:04:24.189094 containerd[1473]: time="2026-04-28T02:04:24.186144104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wckjt,Uid:02169a02-9a4e-4d7c-b828-48ee7e385e7a,Namespace:kube-system,Attempt:0,}" Apr 28 02:04:24.272669 containerd[1473]: time="2026-04-28T02:04:24.271711749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:04:24.272669 containerd[1473]: time="2026-04-28T02:04:24.271808444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:04:24.272669 containerd[1473]: time="2026-04-28T02:04:24.271831429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:04:24.273852 containerd[1473]: time="2026-04-28T02:04:24.272128673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:04:24.309228 systemd[1]: Started cri-containerd-fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452.scope - libcontainer container fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452. 
Apr 28 02:04:24.383957 containerd[1473]: time="2026-04-28T02:04:24.383796107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wckjt,Uid:02169a02-9a4e-4d7c-b828-48ee7e385e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\"" Apr 28 02:04:24.385167 kubelet[2529]: E0428 02:04:24.385061 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:04:24.393900 containerd[1473]: time="2026-04-28T02:04:24.392897691Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 02:04:24.420250 containerd[1473]: time="2026-04-28T02:04:24.419960852Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef\"" Apr 28 02:04:24.420998 containerd[1473]: time="2026-04-28T02:04:24.420890304Z" level=info msg="StartContainer for \"95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef\"" Apr 28 02:04:24.488061 systemd[1]: Started cri-containerd-95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef.scope - libcontainer container 95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef. Apr 28 02:04:24.573232 containerd[1473]: time="2026-04-28T02:04:24.572730379Z" level=info msg="StartContainer for \"95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef\" returns successfully" Apr 28 02:04:24.594745 systemd[1]: cri-containerd-95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef.scope: Deactivated successfully. 
Apr 28 02:04:24.673547 kubelet[2529]: E0428 02:04:24.673214 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:04:24.674218 containerd[1473]: time="2026-04-28T02:04:24.674132537Z" level=info msg="shim disconnected" id=95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef namespace=k8s.io Apr 28 02:04:24.674218 containerd[1473]: time="2026-04-28T02:04:24.674210920Z" level=warning msg="cleaning up after shim disconnected" id=95cb3723ec3cc3d497697d7fb2592eb2bc05abbf418d648fc3915a9c175125ef namespace=k8s.io Apr 28 02:04:24.674218 containerd[1473]: time="2026-04-28T02:04:24.674218944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:04:25.684512 kubelet[2529]: E0428 02:04:25.684481 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:04:25.703750 containerd[1473]: time="2026-04-28T02:04:25.703201136Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 02:04:25.767845 containerd[1473]: time="2026-04-28T02:04:25.767756180Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1\"" Apr 28 02:04:25.770691 containerd[1473]: time="2026-04-28T02:04:25.769484668Z" level=info msg="StartContainer for \"17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1\"" Apr 28 02:04:25.834510 systemd[1]: Started cri-containerd-17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1.scope - libcontainer container 
17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1. Apr 28 02:04:25.882769 containerd[1473]: time="2026-04-28T02:04:25.881977547Z" level=info msg="StartContainer for \"17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1\" returns successfully" Apr 28 02:04:25.893850 systemd[1]: cri-containerd-17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1.scope: Deactivated successfully. Apr 28 02:04:25.933499 containerd[1473]: time="2026-04-28T02:04:25.933306015Z" level=info msg="shim disconnected" id=17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1 namespace=k8s.io Apr 28 02:04:25.934118 containerd[1473]: time="2026-04-28T02:04:25.934047390Z" level=warning msg="cleaning up after shim disconnected" id=17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1 namespace=k8s.io Apr 28 02:04:25.934118 containerd[1473]: time="2026-04-28T02:04:25.934114637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:04:26.020979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17c3b95cf925914f27a433f23f35f7ac6de11ae9b7fe15c11f733a0be8b99cb1-rootfs.mount: Deactivated successfully. Apr 28 02:04:26.693473 kubelet[2529]: E0428 02:04:26.692883 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:04:26.705034 containerd[1473]: time="2026-04-28T02:04:26.703335425Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 02:04:26.725510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667552992.mount: Deactivated successfully. 
Apr 28 02:04:26.728630 containerd[1473]: time="2026-04-28T02:04:26.728483320Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5\""
Apr 28 02:04:26.729646 containerd[1473]: time="2026-04-28T02:04:26.729626343Z" level=info msg="StartContainer for \"415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5\""
Apr 28 02:04:26.797713 systemd[1]: Started cri-containerd-415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5.scope - libcontainer container 415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5.
Apr 28 02:04:26.839941 containerd[1473]: time="2026-04-28T02:04:26.839748314Z" level=info msg="StartContainer for \"415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5\" returns successfully"
Apr 28 02:04:26.841256 systemd[1]: cri-containerd-415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5.scope: Deactivated successfully.
Apr 28 02:04:26.889438 containerd[1473]: time="2026-04-28T02:04:26.889096476Z" level=info msg="shim disconnected" id=415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5 namespace=k8s.io
Apr 28 02:04:26.889438 containerd[1473]: time="2026-04-28T02:04:26.889186127Z" level=warning msg="cleaning up after shim disconnected" id=415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5 namespace=k8s.io
Apr 28 02:04:26.889438 containerd[1473]: time="2026-04-28T02:04:26.889193951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:27.020981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-415b753af2c644f5e404048b61b9bd8d4af27e52c81c9d366d85ce0d19e841f5-rootfs.mount: Deactivated successfully.
Apr 28 02:04:27.045783 kubelet[2529]: E0428 02:04:27.045674 2529 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 02:04:27.705053 kubelet[2529]: E0428 02:04:27.704829 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:27.718719 containerd[1473]: time="2026-04-28T02:04:27.717139853Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 28 02:04:27.743810 containerd[1473]: time="2026-04-28T02:04:27.743674579Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82\""
Apr 28 02:04:27.746028 containerd[1473]: time="2026-04-28T02:04:27.745861816Z" level=info msg="StartContainer for \"a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82\""
Apr 28 02:04:27.805035 systemd[1]: Started cri-containerd-a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82.scope - libcontainer container a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82.
Apr 28 02:04:27.842105 systemd[1]: cri-containerd-a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82.scope: Deactivated successfully.
Apr 28 02:04:27.845005 containerd[1473]: time="2026-04-28T02:04:27.844314152Z" level=info msg="StartContainer for \"a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82\" returns successfully"
Apr 28 02:04:27.890485 containerd[1473]: time="2026-04-28T02:04:27.890301606Z" level=info msg="shim disconnected" id=a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82 namespace=k8s.io
Apr 28 02:04:27.890485 containerd[1473]: time="2026-04-28T02:04:27.890518628Z" level=warning msg="cleaning up after shim disconnected" id=a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82 namespace=k8s.io
Apr 28 02:04:27.890485 containerd[1473]: time="2026-04-28T02:04:27.890527502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:04:27.929268 containerd[1473]: time="2026-04-28T02:04:27.929060944Z" level=warning msg="cleanup warnings time=\"2026-04-28T02:04:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 02:04:28.022267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7d4861dec947a8c10efa13da040cddabe13f891c75b0ba7904fadc2a996ec82-rootfs.mount: Deactivated successfully.
Apr 28 02:04:28.716067 kubelet[2529]: E0428 02:04:28.715914 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:28.736678 containerd[1473]: time="2026-04-28T02:04:28.735699134Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 02:04:28.780806 containerd[1473]: time="2026-04-28T02:04:28.780611067Z" level=info msg="CreateContainer within sandbox \"fd5ed2981cfbaf8933b04359f21875767c91d021b0efb8938e9eab5b5bb34452\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe\""
Apr 28 02:04:28.782518 containerd[1473]: time="2026-04-28T02:04:28.782226207Z" level=info msg="StartContainer for \"8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe\""
Apr 28 02:04:28.842665 systemd[1]: Started cri-containerd-8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe.scope - libcontainer container 8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe.
Apr 28 02:04:28.885763 containerd[1473]: time="2026-04-28T02:04:28.885489452Z" level=info msg="StartContainer for \"8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe\" returns successfully"
Apr 28 02:04:28.927265 kubelet[2529]: I0428 02:04:28.925359 2529 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T02:04:28Z","lastTransitionTime":"2026-04-28T02:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 28 02:04:29.230661 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 28 02:04:29.753270 kubelet[2529]: E0428 02:04:29.752856 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:29.783252 kubelet[2529]: I0428 02:04:29.782992 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wckjt" podStartSLOduration=6.782979901 podStartE2EDuration="6.782979901s" podCreationTimestamp="2026-04-28 02:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:04:29.782780257 +0000 UTC m=+83.008049821" watchObservedRunningTime="2026-04-28 02:04:29.782979901 +0000 UTC m=+83.008249485"
Apr 28 02:04:29.924210 kubelet[2529]: E0428 02:04:29.924009 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:30.757165 kubelet[2529]: E0428 02:04:30.757082 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:33.486651 systemd-networkd[1396]: lxc_health: Link UP
Apr 28 02:04:33.496628 systemd-networkd[1396]: lxc_health: Gained carrier
Apr 28 02:04:34.184249 kubelet[2529]: E0428 02:04:34.184218 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:34.779880 kubelet[2529]: E0428 02:04:34.779782 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:35.505668 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Apr 28 02:04:35.649115 systemd[1]: run-containerd-runc-k8s.io-8b5aa7839a0d4a12bec661ad7fddc98cf4e0035f34775441e08d40b5a3f208fe-runc.cwvQFG.mount: Deactivated successfully.
Apr 28 02:04:35.787843 kubelet[2529]: E0428 02:04:35.786068 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:35.924643 kubelet[2529]: E0428 02:04:35.924547 2529 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:04:37.951903 kubelet[2529]: E0428 02:04:37.951872 2529 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47906->127.0.0.1:41505: write tcp 127.0.0.1:47906->127.0.0.1:41505: write: broken pipe
Apr 28 02:04:40.113003 sshd[4362]: pam_unix(sshd:session): session closed for user core
Apr 28 02:04:40.116212 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:52240.service: Deactivated successfully.
Apr 28 02:04:40.117879 systemd[1]: session-26.scope: Deactivated successfully.
Apr 28 02:04:40.119313 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit.
Apr 28 02:04:40.121098 systemd-logind[1463]: Removed session 26.