Apr 28 02:14:43.882822 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 02:14:43.882850 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:14:43.882865 kernel: BIOS-provided physical RAM map:
Apr 28 02:14:43.882872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 02:14:43.882880 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 02:14:43.882888 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 02:14:43.882897 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 28 02:14:43.882906 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 28 02:14:43.882914 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 02:14:43.882923 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 02:14:43.882932 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 02:14:43.882939 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 02:14:43.882947 kernel: NX (Execute Disable) protection: active
Apr 28 02:14:43.882956 kernel: APIC: Static calls initialized
Apr 28 02:14:43.882966 kernel: SMBIOS 2.8 present.
Apr 28 02:14:43.882977 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 28 02:14:43.882985 kernel: Hypervisor detected: KVM
Apr 28 02:14:43.882994 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 02:14:43.883003 kernel: kvm-clock: using sched offset of 4544872972 cycles
Apr 28 02:14:43.883012 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 02:14:43.883021 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 02:14:43.883030 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 02:14:43.883039 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 02:14:43.883048 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 28 02:14:43.883058 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 02:14:43.883066 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 02:14:43.883075 kernel: Using GB pages for direct mapping
Apr 28 02:14:43.883083 kernel: ACPI: Early table checksum verification disabled
Apr 28 02:14:43.883092 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 28 02:14:43.883100 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883109 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883118 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883127 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 28 02:14:43.883137 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883146 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883155 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883163 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:14:43.883172 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 28 02:14:43.883180 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 28 02:14:43.883189 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 28 02:14:43.883204 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 28 02:14:43.883213 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 28 02:14:43.883222 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 28 02:14:43.883231 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 28 02:14:43.883241 kernel: No NUMA configuration found
Apr 28 02:14:43.883249 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 28 02:14:43.883259 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 28 02:14:43.883270 kernel: Zone ranges:
Apr 28 02:14:43.883279 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 02:14:43.883289 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 28 02:14:43.883298 kernel: Normal empty
Apr 28 02:14:43.883307 kernel: Movable zone start for each node
Apr 28 02:14:43.883316 kernel: Early memory node ranges
Apr 28 02:14:43.883349 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 02:14:43.883360 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 28 02:14:43.883370 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 28 02:14:43.883378 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 02:14:43.883390 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 02:14:43.883398 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 28 02:14:43.883408 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 02:14:43.883417 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 02:14:43.883425 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 02:14:43.883433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 02:14:43.883443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 02:14:43.883451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 02:14:43.883460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 02:14:43.883471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 02:14:43.883479 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 02:14:43.883488 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 02:14:43.883497 kernel: TSC deadline timer available
Apr 28 02:14:43.883506 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 28 02:14:43.883515 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 02:14:43.883524 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 02:14:43.883533 kernel: kvm-guest: setup PV sched yield
Apr 28 02:14:43.883541 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 02:14:43.883553 kernel: Booting paravirtualized kernel on KVM
Apr 28 02:14:43.883561 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 02:14:43.883571 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 02:14:43.883579 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 28 02:14:43.883588 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 28 02:14:43.883597 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 02:14:43.883605 kernel: kvm-guest: PV spinlocks enabled
Apr 28 02:14:43.883613 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 02:14:43.883623 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:14:43.883634 kernel: random: crng init done
Apr 28 02:14:43.883643 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 02:14:43.883652 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 02:14:43.883660 kernel: Fallback order for Node 0: 0
Apr 28 02:14:43.883669 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 28 02:14:43.883677 kernel: Policy zone: DMA32
Apr 28 02:14:43.883686 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 02:14:43.883733 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137896K reserved, 0K cma-reserved)
Apr 28 02:14:43.883746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 02:14:43.883755 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 02:14:43.883764 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 02:14:43.883773 kernel: Dynamic Preempt: voluntary
Apr 28 02:14:43.883782 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 02:14:43.883792 kernel: rcu: RCU event tracing is enabled.
Apr 28 02:14:43.883801 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 02:14:43.883810 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 02:14:43.883819 kernel: Rude variant of Tasks RCU enabled.
Apr 28 02:14:43.883829 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 02:14:43.883841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 02:14:43.883850 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 02:14:43.883860 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 02:14:43.883870 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 02:14:43.883879 kernel: Console: colour VGA+ 80x25
Apr 28 02:14:43.883888 kernel: printk: console [ttyS0] enabled
Apr 28 02:14:43.883897 kernel: ACPI: Core revision 20230628
Apr 28 02:14:43.883907 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 02:14:43.883917 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 02:14:43.883928 kernel: x2apic enabled
Apr 28 02:14:43.883937 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 02:14:43.883946 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 02:14:43.883956 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 02:14:43.883965 kernel: kvm-guest: setup PV IPIs
Apr 28 02:14:43.883974 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 02:14:43.883983 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:14:43.884002 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 02:14:43.884011 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 02:14:43.884021 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 02:14:43.884030 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 02:14:43.884042 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 02:14:43.884052 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 02:14:43.884062 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 02:14:43.884073 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 02:14:43.884083 kernel: RETBleed: Vulnerable
Apr 28 02:14:43.884094 kernel: Speculative Store Bypass: Vulnerable
Apr 28 02:14:43.884104 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 02:14:43.884115 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 02:14:43.884124 kernel: active return thunk: its_return_thunk
Apr 28 02:14:43.884134 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 02:14:43.884145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 02:14:43.884156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 02:14:43.884165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 02:14:43.884174 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 02:14:43.884185 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 02:14:43.884194 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 02:14:43.884202 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 02:14:43.884211 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 02:14:43.884220 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 02:14:43.884229 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 02:14:43.884239 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 02:14:43.884248 kernel: Freeing SMP alternatives memory: 32K
Apr 28 02:14:43.884258 kernel: pid_max: default: 32768 minimum: 301
Apr 28 02:14:43.884269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 02:14:43.884279 kernel: landlock: Up and running.
Apr 28 02:14:43.884288 kernel: SELinux: Initializing.
Apr 28 02:14:43.884297 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:14:43.884307 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:14:43.884317 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 02:14:43.884359 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:14:43.884369 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:14:43.884379 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:14:43.884391 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 02:14:43.884400 kernel: signal: max sigframe size: 3632
Apr 28 02:14:43.884410 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 02:14:43.884421 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 02:14:43.884431 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 02:14:43.884440 kernel: smp: Bringing up secondary CPUs ...
Apr 28 02:14:43.884450 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 02:14:43.884460 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 02:14:43.884469 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 02:14:43.884482 kernel: smpboot: Max logical packages: 1
Apr 28 02:14:43.884491 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 02:14:43.884501 kernel: devtmpfs: initialized
Apr 28 02:14:43.884510 kernel: x86/mm: Memory block size: 128MB
Apr 28 02:14:43.884521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 02:14:43.884531 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 02:14:43.884542 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 02:14:43.884553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 02:14:43.884565 kernel: audit: initializing netlink subsys (disabled)
Apr 28 02:14:43.884578 kernel: audit: type=2000 audit(1777342483.209:1): state=initialized audit_enabled=0 res=1
Apr 28 02:14:43.884588 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 02:14:43.884599 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 02:14:43.884610 kernel: cpuidle: using governor menu
Apr 28 02:14:43.884621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 02:14:43.884631 kernel: dca service started, version 1.12.1
Apr 28 02:14:43.884641 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 02:14:43.884652 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 02:14:43.884663 kernel: PCI: Using configuration type 1 for base access
Apr 28 02:14:43.884676 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 02:14:43.884687 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 02:14:43.884737 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 02:14:43.884749 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 02:14:43.884761 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 02:14:43.884771 kernel: ACPI: Added _OSI(Module Device)
Apr 28 02:14:43.884782 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 02:14:43.884793 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 02:14:43.884803 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 02:14:43.884816 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 02:14:43.884827 kernel: ACPI: Interpreter enabled
Apr 28 02:14:43.884837 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 02:14:43.884847 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 02:14:43.884858 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 02:14:43.884869 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 02:14:43.884880 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 02:14:43.884889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 02:14:43.885049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 02:14:43.885153 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 02:14:43.885242 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 02:14:43.885254 kernel: PCI host bridge to bus 0000:00
Apr 28 02:14:43.885369 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 02:14:43.885451 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 02:14:43.885530 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 02:14:43.885612 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 28 02:14:43.885690 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 02:14:43.885839 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 28 02:14:43.885919 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 02:14:43.886021 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 02:14:43.886118 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 28 02:14:43.886210 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 28 02:14:43.886297 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 28 02:14:43.886410 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 28 02:14:43.886495 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 02:14:43.886590 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 28 02:14:43.886678 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 02:14:43.886952 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 28 02:14:43.887048 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 28 02:14:43.887144 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 28 02:14:43.887231 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 02:14:43.887320 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 28 02:14:43.887440 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 28 02:14:43.887536 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 28 02:14:43.887626 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 28 02:14:43.887753 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 28 02:14:43.887845 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 28 02:14:43.887934 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 28 02:14:43.888029 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 02:14:43.888117 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 02:14:43.888220 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 02:14:43.888310 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 28 02:14:43.888431 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 28 02:14:43.888527 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 02:14:43.888613 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 02:14:43.888626 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 02:14:43.888636 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 02:14:43.888645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 02:14:43.888655 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 02:14:43.888668 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 02:14:43.888679 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 02:14:43.888689 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 02:14:43.888734 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 02:14:43.888744 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 02:14:43.888754 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 02:14:43.888763 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 02:14:43.888773 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 02:14:43.888784 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 02:14:43.888797 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 02:14:43.888807 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 02:14:43.888818 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 02:14:43.888829 kernel: iommu: Default domain type: Translated
Apr 28 02:14:43.888839 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 02:14:43.888849 kernel: PCI: Using ACPI for IRQ routing
Apr 28 02:14:43.888860 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 02:14:43.888870 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 02:14:43.888880 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 28 02:14:43.889014 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 02:14:43.889109 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 02:14:43.889199 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 02:14:43.889212 kernel: vgaarb: loaded
Apr 28 02:14:43.889223 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 02:14:43.889233 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 02:14:43.889243 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 02:14:43.889253 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 02:14:43.889263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 02:14:43.889276 kernel: pnp: PnP ACPI init
Apr 28 02:14:43.889428 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 02:14:43.889446 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 02:14:43.889456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 02:14:43.889467 kernel: NET: Registered PF_INET protocol family
Apr 28 02:14:43.889476 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 02:14:43.889486 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 02:14:43.889496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 02:14:43.889508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 02:14:43.889518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 02:14:43.889529 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 02:14:43.889539 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:14:43.889549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:14:43.889559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 02:14:43.889569 kernel: NET: Registered PF_XDP protocol family
Apr 28 02:14:43.889663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 02:14:43.889798 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 02:14:43.889948 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 02:14:43.890028 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 28 02:14:43.890107 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 02:14:43.890185 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 28 02:14:43.890197 kernel: PCI: CLS 0 bytes, default 64
Apr 28 02:14:43.890207 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 02:14:43.890217 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:14:43.890227 kernel: Initialise system trusted keyrings
Apr 28 02:14:43.890240 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 02:14:43.890250 kernel: Key type asymmetric registered
Apr 28 02:14:43.890260 kernel: Asymmetric key parser 'x509' registered
Apr 28 02:14:43.890269 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 28 02:14:43.890279 kernel: io scheduler mq-deadline registered
Apr 28 02:14:43.890288 kernel: io scheduler kyber registered
Apr 28 02:14:43.890298 kernel: io scheduler bfq registered
Apr 28 02:14:43.890308 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 02:14:43.890319 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 02:14:43.890359 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 02:14:43.890370 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 02:14:43.890380 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 02:14:43.890390 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 02:14:43.890399 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 02:14:43.890409 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 02:14:43.890418 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 02:14:43.890514 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 02:14:43.890531 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 02:14:43.890611 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 02:14:43.890692 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T02:14:43 UTC (1777342483)
Apr 28 02:14:43.890820 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 28 02:14:43.890834 kernel: intel_pstate: CPU model not supported
Apr 28 02:14:43.890842 kernel: NET: Registered PF_INET6 protocol family
Apr 28 02:14:43.890853 kernel: Segment Routing with IPv6
Apr 28 02:14:43.890862 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 02:14:43.890872 kernel: NET: Registered PF_PACKET protocol family
Apr 28 02:14:43.890883 kernel: Key type dns_resolver registered
Apr 28 02:14:43.890893 kernel: IPI shorthand broadcast: enabled
Apr 28 02:14:43.890903 kernel: sched_clock: Marking stable (911010884, 342801735)->(1360131296, -106318677)
Apr 28 02:14:43.890912 kernel: registered taskstats version 1
Apr 28 02:14:43.890923 kernel: Loading compiled-in X.509 certificates
Apr 28 02:14:43.890932 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18'
Apr 28 02:14:43.890942 kernel: Key type .fscrypt registered
Apr 28 02:14:43.890953 kernel: Key type fscrypt-provisioning registered
Apr 28 02:14:43.890962 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 28 02:14:43.890974 kernel: ima: Allocated hash algorithm: sha1
Apr 28 02:14:43.890983 kernel: ima: No architecture policies found
Apr 28 02:14:43.890994 kernel: clk: Disabling unused clocks
Apr 28 02:14:43.891003 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 02:14:43.891013 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 02:14:43.891023 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 02:14:43.891033 kernel: Run /init as init process
Apr 28 02:14:43.891043 kernel: with arguments:
Apr 28 02:14:43.891052 kernel: /init
Apr 28 02:14:43.891064 kernel: with environment:
Apr 28 02:14:43.891073 kernel: HOME=/
Apr 28 02:14:43.891083 kernel: TERM=linux
Apr 28 02:14:43.891095 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:14:43.891108 systemd[1]: Detected virtualization kvm.
Apr 28 02:14:43.891119 systemd[1]: Detected architecture x86-64.
Apr 28 02:14:43.891129 systemd[1]: Running in initrd.
Apr 28 02:14:43.891139 systemd[1]: No hostname configured, using default hostname.
Apr 28 02:14:43.891152 systemd[1]: Hostname set to .
Apr 28 02:14:43.891162 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:14:43.891173 systemd[1]: Queued start job for default target initrd.target.
Apr 28 02:14:43.891183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:14:43.891194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:14:43.891206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 02:14:43.891216 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:14:43.891227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 02:14:43.891241 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 02:14:43.891266 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 02:14:43.891277 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 02:14:43.891288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:14:43.891301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:14:43.891312 systemd[1]: Reached target paths.target - Path Units.
Apr 28 02:14:43.891322 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:14:43.891359 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:14:43.891370 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 02:14:43.891381 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:14:43.891393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:14:43.891403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 02:14:43.891414 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 02:14:43.891427 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:14:43.891438 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:14:43.891448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:14:43.891460 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 02:14:43.891472 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 02:14:43.891484 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 02:14:43.891494 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 02:14:43.891505 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 02:14:43.891516 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 02:14:43.891529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 02:14:43.891539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:14:43.891551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 02:14:43.891580 systemd-journald[194]: Collecting audit messages is disabled. Apr 28 02:14:43.891608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 02:14:43.891620 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 02:14:43.891636 systemd-journald[194]: Journal started Apr 28 02:14:43.891662 systemd-journald[194]: Runtime Journal (/run/log/journal/56d415839df74e0a9a5952d9f9acd5de) is 6.0M, max 48.4M, 42.3M free. Apr 28 02:14:43.893799 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 02:14:43.896248 systemd-modules-load[195]: Inserted module 'overlay' Apr 28 02:14:43.903843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 02:14:43.999394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 28 02:14:43.999418 kernel: Bridge firewalling registered Apr 28 02:14:43.922682 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 28 02:14:44.019984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 02:14:44.020400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 02:14:44.026538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:14:44.030209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 02:14:44.033428 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:14:44.041055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:14:44.047510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 02:14:44.047895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 02:14:44.051269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:14:44.053899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 02:14:44.056285 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:14:44.059958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 02:14:44.069574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 28 02:14:44.075116 dracut-cmdline[230]: dracut-dracut-053 Apr 28 02:14:44.078281 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 02:14:44.081684 systemd-resolved[226]: Positive Trust Anchors: Apr 28 02:14:44.081692 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 02:14:44.081751 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 02:14:44.083667 systemd-resolved[226]: Defaulting to hostname 'linux'. Apr 28 02:14:44.084424 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 02:14:44.090116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:14:44.149847 kernel: SCSI subsystem initialized Apr 28 02:14:44.157798 kernel: Loading iSCSI transport class v2.0-870. Apr 28 02:14:44.168766 kernel: iscsi: registered transport (tcp) Apr 28 02:14:44.186891 kernel: iscsi: registered transport (qla4xxx) Apr 28 02:14:44.186961 kernel: QLogic iSCSI HBA Driver Apr 28 02:14:44.218896 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 28 02:14:44.228949 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 02:14:44.251867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 02:14:44.251908 kernel: device-mapper: uevent: version 1.0.3 Apr 28 02:14:44.253500 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 02:14:44.291792 kernel: raid6: avx512x4 gen() 44664 MB/s Apr 28 02:14:44.308984 kernel: raid6: avx512x2 gen() 42921 MB/s Apr 28 02:14:44.325819 kernel: raid6: avx512x1 gen() 43098 MB/s Apr 28 02:14:44.342867 kernel: raid6: avx2x4 gen() 37154 MB/s Apr 28 02:14:44.359945 kernel: raid6: avx2x2 gen() 37338 MB/s Apr 28 02:14:44.377847 kernel: raid6: avx2x1 gen() 28791 MB/s Apr 28 02:14:44.377973 kernel: raid6: using algorithm avx512x4 gen() 44664 MB/s Apr 28 02:14:44.395832 kernel: raid6: .... xor() 9581 MB/s, rmw enabled Apr 28 02:14:44.395933 kernel: raid6: using avx512x2 recovery algorithm Apr 28 02:14:44.416800 kernel: xor: automatically using best checksumming function avx Apr 28 02:14:44.554787 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 02:14:44.564985 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 02:14:44.578930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 02:14:44.588823 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 28 02:14:44.591394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:14:44.599829 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 02:14:44.610093 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 28 02:14:44.634659 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 02:14:44.640901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 28 02:14:44.670527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 02:14:44.679855 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 02:14:44.689567 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 02:14:44.693589 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 02:14:44.698002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 02:14:44.700081 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 02:14:44.709243 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 02:14:44.712956 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 02:14:44.716085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 02:14:44.724048 kernel: AVX2 version of gcm_enc/dec engaged. Apr 28 02:14:44.724067 kernel: AES CTR mode by8 optimization enabled Apr 28 02:14:44.730801 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 28 02:14:44.725642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 02:14:44.741957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 02:14:44.742008 kernel: GPT:9289727 != 19775487 Apr 28 02:14:44.742025 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 02:14:44.742042 kernel: GPT:9289727 != 19775487 Apr 28 02:14:44.742066 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 02:14:44.742083 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:14:44.725755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:14:44.730315 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:14:44.732315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 28 02:14:44.732445 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:14:44.742084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:14:44.757478 kernel: libata version 3.00 loaded. Apr 28 02:14:44.762037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:14:44.767072 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 02:14:44.772101 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 02:14:44.777603 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471) Apr 28 02:14:44.777643 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 02:14:44.777664 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474) Apr 28 02:14:44.779816 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 02:14:44.779948 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 02:14:44.785750 kernel: scsi host0: ahci Apr 28 02:14:44.788742 kernel: scsi host1: ahci Apr 28 02:14:44.789884 kernel: scsi host2: ahci Apr 28 02:14:44.790080 kernel: scsi host3: ahci Apr 28 02:14:44.790151 kernel: scsi host4: ahci Apr 28 02:14:44.790220 kernel: scsi host5: ahci Apr 28 02:14:44.790291 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 28 02:14:44.790299 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 28 02:14:44.790311 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 28 02:14:44.790318 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 28 02:14:44.790325 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 28 02:14:44.790361 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 28 02:14:44.795359 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 28 02:14:44.897985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:14:44.905212 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 02:14:44.913561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 02:14:44.918593 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 02:14:44.922859 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 02:14:44.937194 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 02:14:44.938100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:14:44.951091 disk-uuid[568]: Primary Header is updated. Apr 28 02:14:44.951091 disk-uuid[568]: Secondary Entries is updated. Apr 28 02:14:44.951091 disk-uuid[568]: Secondary Header is updated. Apr 28 02:14:44.953745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:14:44.958764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:14:44.962751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:14:44.963284 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 28 02:14:45.098839 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 02:14:45.098965 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 02:14:45.101157 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 02:14:45.101745 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 02:14:45.104748 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 02:14:45.106762 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 02:14:45.106779 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 02:14:45.108135 kernel: ata3.00: applying bridge limits Apr 28 02:14:45.109119 kernel: ata3.00: configured for UDMA/100 Apr 28 02:14:45.109746 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 02:14:45.155019 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 02:14:45.155366 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 02:14:45.171778 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 02:14:45.964377 disk-uuid[570]: The operation has completed successfully. Apr 28 02:14:45.966167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:14:45.983600 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 02:14:45.983729 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 02:14:46.002913 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 02:14:46.007411 sh[594]: Success Apr 28 02:14:46.017784 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 28 02:14:46.043927 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 02:14:46.065206 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 02:14:46.069923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 28 02:14:46.079481 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 02:14:46.079516 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:14:46.079537 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 02:14:46.081290 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 02:14:46.082589 kernel: BTRFS info (device dm-0): using free space tree Apr 28 02:14:46.088320 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 02:14:46.090156 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 02:14:46.107904 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 02:14:46.111856 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 02:14:46.120729 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:14:46.120748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:14:46.120755 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:14:46.122732 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:14:46.129833 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 02:14:46.132914 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:14:46.138555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 02:14:46.146942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 28 02:14:46.192539 ignition[679]: Ignition 2.19.0 Apr 28 02:14:46.192557 ignition[679]: Stage: fetch-offline Apr 28 02:14:46.192587 ignition[679]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:46.192593 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:46.192656 ignition[679]: parsed url from cmdline: "" Apr 28 02:14:46.192659 ignition[679]: no config URL provided Apr 28 02:14:46.192662 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 02:14:46.192736 ignition[679]: no config at "/usr/lib/ignition/user.ign" Apr 28 02:14:46.192757 ignition[679]: op(1): [started] loading QEMU firmware config module Apr 28 02:14:46.192761 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 02:14:46.202616 ignition[679]: op(1): [finished] loading QEMU firmware config module Apr 28 02:14:46.202631 ignition[679]: QEMU firmware config was not found. Ignoring... Apr 28 02:14:46.213945 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 02:14:46.220882 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 02:14:46.236642 systemd-networkd[783]: lo: Link UP Apr 28 02:14:46.236668 systemd-networkd[783]: lo: Gained carrier Apr 28 02:14:46.237595 systemd-networkd[783]: Enumeration completed Apr 28 02:14:46.237765 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 02:14:46.238129 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:14:46.238131 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 28 02:14:46.239915 systemd-networkd[783]: eth0: Link UP Apr 28 02:14:46.239919 systemd-networkd[783]: eth0: Gained carrier Apr 28 02:14:46.239927 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:14:46.243610 systemd[1]: Reached target network.target - Network. Apr 28 02:14:46.278909 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 02:14:46.344261 ignition[679]: parsing config with SHA512: 6effeaa6fb3a099530a74fb143b5e1828d56fd92283afd6f3e00906174c4b8d0f8140d4a2ae881326cfe853c4e7b3cf46dae1bf5ba3339de17ec5b0c9f433a48 Apr 28 02:14:46.347964 unknown[679]: fetched base config from "system" Apr 28 02:14:46.347976 unknown[679]: fetched user config from "qemu" Apr 28 02:14:46.350684 ignition[679]: fetch-offline: fetch-offline passed Apr 28 02:14:46.350931 ignition[679]: Ignition finished successfully Apr 28 02:14:46.355478 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 02:14:46.359835 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 28 02:14:46.370049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 02:14:46.382604 ignition[787]: Ignition 2.19.0 Apr 28 02:14:46.382626 ignition[787]: Stage: kargs Apr 28 02:14:46.382803 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:46.382810 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:46.383460 ignition[787]: kargs: kargs passed Apr 28 02:14:46.383495 ignition[787]: Ignition finished successfully Apr 28 02:14:46.389545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 02:14:46.398170 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 28 02:14:46.408369 ignition[795]: Ignition 2.19.0 Apr 28 02:14:46.408386 ignition[795]: Stage: disks Apr 28 02:14:46.410264 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 02:14:46.408516 ignition[795]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:46.412732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 02:14:46.408523 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:46.416242 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 02:14:46.409188 ignition[795]: disks: disks passed Apr 28 02:14:46.417798 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 02:14:46.409222 ignition[795]: Ignition finished successfully Apr 28 02:14:46.423512 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 02:14:46.425426 systemd[1]: Reached target basic.target - Basic System. Apr 28 02:14:46.443977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 02:14:46.457935 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 28 02:14:46.462204 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 02:14:46.474914 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 02:14:46.558844 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none. Apr 28 02:14:46.559396 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 28 02:14:46.561422 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 02:14:46.579956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 28 02:14:46.599871 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Apr 28 02:14:46.599903 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:14:46.599918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:14:46.599932 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:14:46.599945 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:14:46.582919 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 02:14:46.584830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 28 02:14:46.584874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 02:14:46.584903 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 02:14:46.590581 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 02:14:46.596925 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 02:14:46.600320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 02:14:46.642503 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 02:14:46.648561 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Apr 28 02:14:46.653416 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 02:14:46.658468 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 02:14:46.731780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 02:14:46.737901 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 02:14:46.741742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 28 02:14:46.747148 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:14:46.763915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 02:14:46.765920 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 02:14:46.770059 ignition[926]: INFO : Ignition 2.19.0 Apr 28 02:14:46.770059 ignition[926]: INFO : Stage: mount Apr 28 02:14:46.770059 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:46.770059 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:46.770059 ignition[926]: INFO : mount: mount passed Apr 28 02:14:46.770059 ignition[926]: INFO : Ignition finished successfully Apr 28 02:14:46.772886 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 02:14:47.077928 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 02:14:47.088329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 02:14:47.097748 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Apr 28 02:14:47.100973 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:14:47.100999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:14:47.101010 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:14:47.105767 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:14:47.106946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 02:14:47.132013 ignition[957]: INFO : Ignition 2.19.0 Apr 28 02:14:47.132013 ignition[957]: INFO : Stage: files Apr 28 02:14:47.134921 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:47.134921 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:47.134921 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Apr 28 02:14:47.134921 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 02:14:47.134921 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 02:14:47.145910 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 02:14:47.145910 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 02:14:47.145910 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 02:14:47.145910 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 02:14:47.145910 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 02:14:47.137210 unknown[957]: wrote ssh authorized keys file for user: core Apr 28 02:14:47.211793 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 28 02:14:47.474959 systemd-networkd[783]: eth0: Gained IPv6LL Apr 28 02:14:47.611379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 02:14:47.611379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 02:14:47.611379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 28 02:14:47.675163 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 28 02:14:47.723721 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 02:14:47.723721 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:14:47.729829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 28 02:14:47.801637 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 28 02:14:48.095912 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:14:48.095912 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 28 02:14:48.102200 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 02:14:48.127144 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 02:14:48.127144 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 02:14:48.127144 ignition[957]: INFO : files: files passed Apr 28 02:14:48.127144 ignition[957]: INFO : Ignition finished successfully Apr 28 02:14:48.120864 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 02:14:48.137031 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 02:14:48.139544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 02:14:48.143319 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 28 02:14:48.166894 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Apr 28 02:14:48.143428 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 28 02:14:48.171025 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 02:14:48.171025 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 02:14:48.151893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 02:14:48.178103 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 02:14:48.154383 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 28 02:14:48.158160 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 02:14:48.179203 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 02:14:48.179293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 02:14:48.182598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 02:14:48.185891 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 02:14:48.187577 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 02:14:48.188228 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 02:14:48.201328 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 02:14:48.204395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 02:14:48.215678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:14:48.218001 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 28 02:14:48.220065 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 02:14:48.223262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 02:14:48.223379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 02:14:48.229151 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 02:14:48.230968 systemd[1]: Stopped target basic.target - Basic System. Apr 28 02:14:48.235613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 02:14:48.236971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 02:14:48.240198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 02:14:48.243680 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 02:14:48.250032 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 02:14:48.250172 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 28 02:14:48.255843 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 28 02:14:48.258950 systemd[1]: Stopped target swap.target - Swaps. Apr 28 02:14:48.260450 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 02:14:48.260591 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 02:14:48.267475 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 02:14:48.267605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 02:14:48.274339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 02:14:48.275893 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 02:14:48.276056 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 02:14:48.276165 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 28 02:14:48.280501 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 02:14:48.280613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 02:14:48.288220 systemd[1]: Stopped target paths.target - Path Units. Apr 28 02:14:48.291572 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 02:14:48.294829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 02:14:48.297117 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 02:14:48.298668 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 02:14:48.302651 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 02:14:48.302766 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 02:14:48.305504 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 02:14:48.305577 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 02:14:48.306965 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 28 02:14:48.307065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 02:14:48.309742 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 02:14:48.309985 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 02:14:48.327012 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 02:14:48.330172 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 02:14:48.331594 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 28 02:14:48.331692 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 02:14:48.335629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 02:14:48.335746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 28 02:14:48.337939 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 02:14:48.339030 ignition[1011]: INFO : Ignition 2.19.0 Apr 28 02:14:48.339030 ignition[1011]: INFO : Stage: umount Apr 28 02:14:48.339030 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 02:14:48.339030 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:14:48.338003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 02:14:48.340663 ignition[1011]: INFO : umount: umount passed Apr 28 02:14:48.340663 ignition[1011]: INFO : Ignition finished successfully Apr 28 02:14:48.340539 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 02:14:48.340626 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 02:14:48.341302 systemd[1]: Stopped target network.target - Network. Apr 28 02:14:48.342019 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 02:14:48.342063 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 02:14:48.342287 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 28 02:14:48.342307 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 02:14:48.342573 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 02:14:48.342609 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 02:14:48.343083 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 02:14:48.343107 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 28 02:14:48.343502 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 02:14:48.344230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 02:14:48.384628 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 28 02:14:48.384869 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Apr 28 02:14:48.390026 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 28 02:14:48.390066 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 02:14:48.390836 systemd-networkd[783]: eth0: DHCPv6 lease lost Apr 28 02:14:48.392225 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 02:14:48.392320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 02:14:48.397541 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 02:14:48.397612 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 02:14:48.410169 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 28 02:14:48.412073 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 02:14:48.412116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 02:14:48.416399 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 02:14:48.416436 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:14:48.419777 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 02:14:48.419822 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 02:14:48.423775 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 02:14:48.429508 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 28 02:14:48.436618 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 02:14:48.436821 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 02:14:48.448651 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 28 02:14:48.448827 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:14:48.450550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 28 02:14:48.450578 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 28 02:14:48.458444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 28 02:14:48.460188 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 02:14:48.465320 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 02:14:48.465398 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 02:14:48.470209 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 02:14:48.470271 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 02:14:48.475762 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 02:14:48.475820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:14:48.491955 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 02:14:48.496609 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 02:14:48.496670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 02:14:48.502798 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 28 02:14:48.502860 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 02:14:48.506785 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 28 02:14:48.506818 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 02:14:48.509168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 02:14:48.509201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:14:48.512814 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 02:14:48.512893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 28 02:14:48.517610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 02:14:48.517739 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 02:14:48.521628 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 02:14:48.524964 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 02:14:48.525007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 02:14:48.539933 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 02:14:48.545765 systemd[1]: Switching root. Apr 28 02:14:48.574263 systemd-journald[194]: Journal stopped Apr 28 02:14:49.303395 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Apr 28 02:14:49.303446 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 02:14:49.303462 kernel: SELinux: policy capability open_perms=1 Apr 28 02:14:49.303475 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 02:14:49.303483 kernel: SELinux: policy capability always_check_network=0 Apr 28 02:14:49.303490 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 02:14:49.303498 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 02:14:49.303505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 02:14:49.303512 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 02:14:49.303520 kernel: audit: type=1403 audit(1777342488.712:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 28 02:14:49.303529 systemd[1]: Successfully loaded SELinux policy in 36.093ms. Apr 28 02:14:49.303544 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.193ms. 
Apr 28 02:14:49.303555 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 02:14:49.303563 systemd[1]: Detected virtualization kvm. Apr 28 02:14:49.303571 systemd[1]: Detected architecture x86-64. Apr 28 02:14:49.303579 systemd[1]: Detected first boot. Apr 28 02:14:49.303587 systemd[1]: Initializing machine ID from VM UUID. Apr 28 02:14:49.303595 zram_generator::config[1055]: No configuration found. Apr 28 02:14:49.303604 systemd[1]: Populated /etc with preset unit settings. Apr 28 02:14:49.303612 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 28 02:14:49.303622 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 28 02:14:49.303629 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 28 02:14:49.303637 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 02:14:49.303646 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 02:14:49.303653 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 02:14:49.303661 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 02:14:49.303669 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 02:14:49.303677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 02:14:49.303686 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 28 02:14:49.303727 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 28 02:14:49.303736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 02:14:49.303743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 02:14:49.303751 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 02:14:49.303760 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 02:14:49.303768 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 02:14:49.303776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 02:14:49.303783 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 28 02:14:49.303793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 02:14:49.303800 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 28 02:14:49.303808 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 28 02:14:49.303820 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 28 02:14:49.303828 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 02:14:49.303835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 02:14:49.303843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 02:14:49.303851 systemd[1]: Reached target slices.target - Slice Units. Apr 28 02:14:49.303860 systemd[1]: Reached target swap.target - Swaps. Apr 28 02:14:49.303868 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 02:14:49.303876 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 02:14:49.303884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 28 02:14:49.303891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 02:14:49.303899 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 02:14:49.303906 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 02:14:49.303914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 02:14:49.303922 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 02:14:49.303932 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 02:14:49.303940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:49.303947 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 02:14:49.303955 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 02:14:49.303963 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 02:14:49.303971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 02:14:49.303979 systemd[1]: Reached target machines.target - Containers. Apr 28 02:14:49.303986 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 02:14:49.303996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:14:49.304004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 02:14:49.304011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 02:14:49.304019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:14:49.304027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 28 02:14:49.304034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:14:49.304042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 02:14:49.304049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:14:49.304057 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 02:14:49.304067 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 28 02:14:49.304075 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 28 02:14:49.304082 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 28 02:14:49.304090 systemd[1]: Stopped systemd-fsck-usr.service. Apr 28 02:14:49.304098 kernel: fuse: init (API version 7.39) Apr 28 02:14:49.304105 kernel: loop: module loaded Apr 28 02:14:49.304113 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 02:14:49.304120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 02:14:49.304128 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 02:14:49.304137 kernel: ACPI: bus type drm_connector registered Apr 28 02:14:49.304156 systemd-journald[1136]: Collecting audit messages is disabled. Apr 28 02:14:49.304172 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 28 02:14:49.304180 systemd-journald[1136]: Journal started Apr 28 02:14:49.304198 systemd-journald[1136]: Runtime Journal (/run/log/journal/56d415839df74e0a9a5952d9f9acd5de) is 6.0M, max 48.4M, 42.3M free. Apr 28 02:14:49.040043 systemd[1]: Queued start job for default target multi-user.target. Apr 28 02:14:49.058925 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 28 02:14:49.059299 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 28 02:14:49.308689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 02:14:49.308750 systemd[1]: verity-setup.service: Deactivated successfully. Apr 28 02:14:49.308761 systemd[1]: Stopped verity-setup.service. Apr 28 02:14:49.310842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:49.316886 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 02:14:49.317561 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 02:14:49.318090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 02:14:49.318344 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 02:14:49.318618 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 02:14:49.319138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 02:14:49.319430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 02:14:49.319829 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 02:14:49.320046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 02:14:49.320301 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 02:14:49.320428 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 02:14:49.320877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:14:49.320980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:14:49.321388 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 02:14:49.321486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 02:14:49.322177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 28 02:14:49.322292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:14:49.322741 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 02:14:49.322936 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 02:14:49.323296 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:14:49.323440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 02:14:49.323870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 02:14:49.324048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 02:14:49.330768 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 02:14:49.331801 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 02:14:49.336302 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 02:14:49.336439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 02:14:49.339844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:14:49.343874 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 02:14:49.345256 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 02:14:49.345490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 02:14:49.361245 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 28 02:14:49.364940 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 02:14:49.364989 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 28 02:14:49.367461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 02:14:49.373764 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Apr 28 02:14:49.373784 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Apr 28 02:14:49.377946 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 28 02:14:49.381198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 02:14:49.382960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:14:49.384687 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 02:14:49.387535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 02:14:49.389505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 02:14:49.390420 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 02:14:49.393213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 28 02:14:49.396094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 02:14:49.399121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:14:49.401499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 02:14:49.402493 systemd-journald[1136]: Time spent on flushing to /var/log/journal/56d415839df74e0a9a5952d9f9acd5de is 16.343ms for 965 entries. Apr 28 02:14:49.402493 systemd-journald[1136]: System Journal (/var/log/journal/56d415839df74e0a9a5952d9f9acd5de) is 8.0M, max 195.6M, 187.6M free. Apr 28 02:14:49.426906 systemd-journald[1136]: Received client request to flush runtime journal. 
Apr 28 02:14:49.426938 kernel: loop0: detected capacity change from 0 to 228704 Apr 28 02:14:49.406602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 02:14:49.410957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 02:14:49.416652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 02:14:49.424903 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 02:14:49.429213 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 02:14:49.433833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 02:14:49.434865 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 02:14:49.438163 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 02:14:49.448981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 02:14:49.449442 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 28 02:14:49.454744 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 28 02:14:49.463922 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 02:14:49.472085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 02:14:49.482803 kernel: loop1: detected capacity change from 0 to 142488 Apr 28 02:14:49.487414 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Apr 28 02:14:49.487428 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Apr 28 02:14:49.490939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 28 02:14:49.522209 kernel: loop2: detected capacity change from 0 to 140768 Apr 28 02:14:49.552744 kernel: loop3: detected capacity change from 0 to 228704 Apr 28 02:14:49.565783 kernel: loop4: detected capacity change from 0 to 142488 Apr 28 02:14:49.579762 kernel: loop5: detected capacity change from 0 to 140768 Apr 28 02:14:49.587767 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 28 02:14:49.588059 (sd-merge)[1199]: Merged extensions into '/usr'. Apr 28 02:14:49.592260 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Apr 28 02:14:49.592284 systemd[1]: Reloading... Apr 28 02:14:49.634768 zram_generator::config[1223]: No configuration found. Apr 28 02:14:49.658821 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 02:14:49.719171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 02:14:49.748048 systemd[1]: Reloading finished in 155 ms. Apr 28 02:14:49.775966 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 02:14:49.778465 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 02:14:49.794977 systemd[1]: Starting ensure-sysext.service... Apr 28 02:14:49.798626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 02:14:49.856016 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Apr 28 02:14:49.856068 systemd[1]: Reloading... Apr 28 02:14:49.866222 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 28 02:14:49.866471 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 02:14:49.867088 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 02:14:49.867268 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 28 02:14:49.867321 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 28 02:14:49.869084 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 02:14:49.869278 systemd-tmpfiles[1263]: Skipping /boot Apr 28 02:14:49.875051 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 02:14:49.875058 systemd-tmpfiles[1263]: Skipping /boot Apr 28 02:14:49.891747 zram_generator::config[1290]: No configuration found. Apr 28 02:14:49.967967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 02:14:49.997761 systemd[1]: Reloading finished in 141 ms. Apr 28 02:14:50.012436 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 02:14:50.026166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 02:14:50.035923 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 02:14:50.043138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 02:14:50.046941 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 02:14:50.052091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 02:14:50.055869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 28 02:14:50.060488 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 02:14:50.063993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.064095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:14:50.066114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:14:50.069415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:14:50.073187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:14:50.075283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:14:50.075394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.079063 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 02:14:50.084113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 02:14:50.084261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:14:50.087516 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 02:14:50.087566 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Apr 28 02:14:50.090983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:14:50.091117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:14:50.094089 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:14:50.094355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 28 02:14:50.102910 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 02:14:50.106118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.106285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:14:50.112090 augenrules[1360]: No rules Apr 28 02:14:50.112964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:14:50.120978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:14:50.126605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:14:50.128607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:14:50.129819 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 02:14:50.131615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.132309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:14:50.137622 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 02:14:50.140205 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 02:14:50.142665 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 02:14:50.145265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:14:50.151997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:14:50.154587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 02:14:50.154686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 28 02:14:50.157310 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:14:50.157469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 02:14:50.162360 systemd-resolved[1333]: Positive Trust Anchors: Apr 28 02:14:50.162609 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 02:14:50.162676 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 02:14:50.166535 systemd-resolved[1333]: Defaulting to hostname 'linux'. Apr 28 02:14:50.167680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 02:14:50.174214 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 28 02:14:50.174460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:14:50.176814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.176953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 02:14:50.191734 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1388) Apr 28 02:14:50.185985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 02:14:50.193355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 28 02:14:50.199482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 02:14:50.208966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 02:14:50.211305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 02:14:50.215812 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 28 02:14:50.215852 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 02:14:50.217133 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 02:14:50.219201 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 02:14:50.219242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 02:14:50.220147 systemd[1]: Finished ensure-sysext.service. Apr 28 02:14:50.223583 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 02:14:50.223724 kernel: ACPI: button: Power Button [PWRF] Apr 28 02:14:50.226526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 02:14:50.226763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 02:14:50.228538 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 28 02:14:50.228840 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 02:14:50.231890 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 02:14:50.232087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 02:14:50.236875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 28 02:14:50.236977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 02:14:50.239768 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 02:14:50.239886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 02:14:50.254798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 28 02:14:50.276052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 02:14:50.288128 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 02:14:50.298950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 02:14:50.301466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 02:14:50.301584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 02:14:50.303434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 02:14:50.309912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:14:50.313984 systemd-networkd[1410]: lo: Link UP Apr 28 02:14:50.313987 systemd-networkd[1410]: lo: Gained carrier Apr 28 02:14:50.315013 systemd-networkd[1410]: Enumeration completed Apr 28 02:14:50.315138 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 02:14:50.315565 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:14:50.315568 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 28 02:14:50.316214 systemd-networkd[1410]: eth0: Link UP Apr 28 02:14:50.316217 systemd-networkd[1410]: eth0: Gained carrier Apr 28 02:14:50.316226 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:14:50.317347 systemd[1]: Reached target network.target - Network. Apr 28 02:14:50.320773 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 02:14:50.323175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 02:14:50.359665 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 02:14:50.376935 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 02:14:50.377498 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 02:14:51.287031 systemd-resolved[1333]: Clock change detected. Flushing caches. Apr 28 02:14:51.287097 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 28 02:14:51.287145 systemd-timesyncd[1421]: Initial clock synchronization to Tue 2026-04-28 02:14:51.286820 UTC. Apr 28 02:14:51.354979 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 02:14:51.420275 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 02:14:51.423343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:14:51.429049 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 02:14:51.458874 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 02:14:51.462017 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 02:14:51.464426 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 28 02:14:51.466566 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 02:14:51.468622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 02:14:51.471201 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 02:14:51.473094 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 02:14:51.475816 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 02:14:51.478806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 02:14:51.478917 systemd[1]: Reached target paths.target - Path Units. Apr 28 02:14:51.480663 systemd[1]: Reached target timers.target - Timer Units. Apr 28 02:14:51.483021 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 02:14:51.486081 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 28 02:14:51.495548 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 02:14:51.499107 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 02:14:51.501817 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 02:14:51.503701 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 02:14:51.505814 systemd[1]: Reached target basic.target - Basic System. Apr 28 02:14:51.509301 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 02:14:51.509368 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 02:14:51.510408 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 02:14:51.513808 lvm[1438]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Apr 28 02:14:51.514760 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 02:14:51.518600 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 02:14:51.523342 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 02:14:51.526942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 02:14:51.528971 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 02:14:51.529262 jq[1441]: false Apr 28 02:14:51.541179 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 02:14:51.548375 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 02:14:51.563218 dbus-daemon[1440]: [system] SELinux support is enabled Apr 28 02:14:51.565817 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 02:14:51.583475 extend-filesystems[1442]: Found loop3 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found loop4 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found loop5 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found sr0 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda1 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda2 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda3 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found usr Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda4 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda6 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda7 Apr 28 02:14:51.590448 extend-filesystems[1442]: Found vda9 Apr 28 02:14:51.590448 extend-filesystems[1442]: Checking size of /dev/vda9 Apr 28 02:14:51.590311 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 28 02:14:51.596629 extend-filesystems[1442]: Resized partition /dev/vda9 Apr 28 02:14:51.600081 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Apr 28 02:14:51.604781 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 28 02:14:51.604812 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1372) Apr 28 02:14:51.616578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 02:14:51.618294 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 28 02:14:51.629988 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 28 02:14:51.630011 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 02:14:51.633409 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 02:14:51.635968 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 02:14:51.645237 jq[1464]: true Apr 28 02:14:51.640326 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 02:14:51.648285 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 02:14:51.648285 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 02:14:51.648285 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 28 02:14:51.654308 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Apr 28 02:14:51.659782 update_engine[1462]: I20260428 02:14:51.659467 1462 main.cc:92] Flatcar Update Engine starting Apr 28 02:14:51.662031 update_engine[1462]: I20260428 02:14:51.660483 1462 update_check_scheduler.cc:74] Next update check in 8m18s Apr 28 02:14:51.661424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 28 02:14:51.661607 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 02:14:51.661816 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 02:14:51.662377 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 02:14:51.664938 systemd[1]: motdgen.service: Deactivated successfully. Apr 28 02:14:51.665158 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 02:14:51.666322 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Apr 28 02:14:51.666337 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 28 02:14:51.667290 systemd-logind[1454]: New seat seat0. Apr 28 02:14:51.669092 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 02:14:51.669210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 28 02:14:51.673279 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 02:14:51.681769 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 28 02:14:51.684067 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 28 02:14:51.685597 jq[1468]: true Apr 28 02:14:51.692258 systemd[1]: Started update-engine.service - Update Engine. Apr 28 02:14:51.692379 tar[1467]: linux-amd64/LICENSE Apr 28 02:14:51.692560 tar[1467]: linux-amd64/helm Apr 28 02:14:51.694909 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 02:14:51.695024 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 28 02:14:51.697435 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 28 02:14:51.697564 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 02:14:51.701058 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 02:14:51.713457 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 02:14:51.734824 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Apr 28 02:14:51.736467 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 02:14:51.737644 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 02:14:51.739104 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 02:14:51.753197 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 02:14:51.755105 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 28 02:14:51.760737 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 02:14:51.761013 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 02:14:51.772106 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 02:14:51.783258 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 02:14:51.791189 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 02:14:51.797987 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 02:14:51.800610 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 28 02:14:51.844443 containerd[1469]: time="2026-04-28T02:14:51.844339529Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 02:14:51.861266 containerd[1469]: time="2026-04-28T02:14:51.861210883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.862801 containerd[1469]: time="2026-04-28T02:14:51.862764302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.862901529Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.862924723Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863061278Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863076921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863121704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863132978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863336877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863354032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863366117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863375260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863458082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864099 containerd[1469]: time="2026-04-28T02:14:51.863674892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864368 containerd[1469]: time="2026-04-28T02:14:51.863792247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:14:51.864368 containerd[1469]: time="2026-04-28T02:14:51.863804531Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 28 02:14:51.864368 containerd[1469]: time="2026-04-28T02:14:51.863912707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 02:14:51.864368 containerd[1469]: time="2026-04-28T02:14:51.863948877Z" level=info msg="metadata content store policy set" policy=shared Apr 28 02:14:51.868901 containerd[1469]: time="2026-04-28T02:14:51.868810317Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 02:14:51.868959 containerd[1469]: time="2026-04-28T02:14:51.868904810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 02:14:51.868959 containerd[1469]: time="2026-04-28T02:14:51.868919369Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 02:14:51.868959 containerd[1469]: time="2026-04-28T02:14:51.868930747Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 02:14:51.868959 containerd[1469]: time="2026-04-28T02:14:51.868942222Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 02:14:51.869109 containerd[1469]: time="2026-04-28T02:14:51.869075626Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 02:14:51.869309 containerd[1469]: time="2026-04-28T02:14:51.869276614Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 02:14:51.869394 containerd[1469]: time="2026-04-28T02:14:51.869364278Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 02:14:51.869394 containerd[1469]: time="2026-04-28T02:14:51.869389365Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 28 02:14:51.869422 containerd[1469]: time="2026-04-28T02:14:51.869398819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 02:14:51.869422 containerd[1469]: time="2026-04-28T02:14:51.869408273Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869422 containerd[1469]: time="2026-04-28T02:14:51.869419725Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869462 containerd[1469]: time="2026-04-28T02:14:51.869428408Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869462 containerd[1469]: time="2026-04-28T02:14:51.869438237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869462 containerd[1469]: time="2026-04-28T02:14:51.869449091Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869462 containerd[1469]: time="2026-04-28T02:14:51.869458367Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869526 containerd[1469]: time="2026-04-28T02:14:51.869467036Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869526 containerd[1469]: time="2026-04-28T02:14:51.869475237Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 02:14:51.869526 containerd[1469]: time="2026-04-28T02:14:51.869491188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 28 02:14:51.869569 containerd[1469]: time="2026-04-28T02:14:51.869525183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869569 containerd[1469]: time="2026-04-28T02:14:51.869535056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869569 containerd[1469]: time="2026-04-28T02:14:51.869545330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869569 containerd[1469]: time="2026-04-28T02:14:51.869554054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869569 containerd[1469]: time="2026-04-28T02:14:51.869563455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869572160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869581067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869589918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869600975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869609194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869617446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 28 02:14:51.869633 containerd[1469]: time="2026-04-28T02:14:51.869627313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869719 containerd[1469]: time="2026-04-28T02:14:51.869638482Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 02:14:51.869719 containerd[1469]: time="2026-04-28T02:14:51.869655146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869719 containerd[1469]: time="2026-04-28T02:14:51.869664386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.869719 containerd[1469]: time="2026-04-28T02:14:51.869672505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 02:14:51.869719 containerd[1469]: time="2026-04-28T02:14:51.869704795Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869718671Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869728279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869737435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869746391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869762837Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869771112Z" level=info msg="NRI interface is disabled by configuration." Apr 28 02:14:51.869782 containerd[1469]: time="2026-04-28T02:14:51.869780059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 28 02:14:51.870083 containerd[1469]: time="2026-04-28T02:14:51.870020211Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 02:14:51.870083 containerd[1469]: time="2026-04-28T02:14:51.870080242Z" level=info msg="Connect containerd service" Apr 28 02:14:51.870216 containerd[1469]: time="2026-04-28T02:14:51.870108733Z" level=info msg="using legacy CRI server" Apr 28 02:14:51.870216 containerd[1469]: time="2026-04-28T02:14:51.870114470Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 02:14:51.871027 containerd[1469]: time="2026-04-28T02:14:51.870948134Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 02:14:51.871790 containerd[1469]: time="2026-04-28T02:14:51.871756736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 28 02:14:51.871989 containerd[1469]: time="2026-04-28T02:14:51.871952440Z" level=info msg="Start subscribing containerd event" Apr 28 02:14:51.872044 containerd[1469]: time="2026-04-28T02:14:51.872023195Z" level=info msg="Start recovering state" Apr 28 02:14:51.872093 containerd[1469]: time="2026-04-28T02:14:51.872075193Z" level=info msg="Start event monitor" Apr 28 02:14:51.872093 containerd[1469]: time="2026-04-28T02:14:51.872081501Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 02:14:51.872123 containerd[1469]: time="2026-04-28T02:14:51.872088622Z" level=info msg="Start snapshots syncer" Apr 28 02:14:51.872123 containerd[1469]: time="2026-04-28T02:14:51.872121599Z" level=info msg="Start cni network conf syncer for default" Apr 28 02:14:51.872151 containerd[1469]: time="2026-04-28T02:14:51.872126895Z" level=info msg="Start streaming server" Apr 28 02:14:51.872151 containerd[1469]: time="2026-04-28T02:14:51.872133738Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 02:14:51.872180 containerd[1469]: time="2026-04-28T02:14:51.872170940Z" level=info msg="containerd successfully booted in 0.029215s" Apr 28 02:14:51.872331 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 02:14:52.084890 tar[1467]: linux-amd64/README.md Apr 28 02:14:52.101601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 02:14:52.736306 systemd-networkd[1410]: eth0: Gained IPv6LL Apr 28 02:14:52.738984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 02:14:52.741604 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 02:14:52.753104 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 02:14:52.757274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 02:14:52.761566 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 02:14:52.780221 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 02:14:52.789793 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 02:14:52.790169 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 02:14:52.793544 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 02:14:53.519261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:14:53.521818 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 02:14:53.523794 systemd[1]: Startup finished in 1.035s (kernel) + 5.030s (initrd) + 3.934s (userspace) = 10.000s. Apr 28 02:14:53.524375 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:14:53.961587 kubelet[1553]: E0428 02:14:53.961175 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:14:53.963920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:14:53.964034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:14:57.758436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 02:14:57.760033 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:36838.service - OpenSSH per-connection server daemon (10.0.0.1:36838). 
Apr 28 02:14:57.798622 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 36838 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:57.800311 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:57.807750 systemd-logind[1454]: New session 1 of user core. Apr 28 02:14:57.808722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 02:14:57.818117 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 02:14:57.829224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 02:14:57.831967 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 02:14:57.838926 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 02:14:57.960440 systemd[1570]: Queued start job for default target default.target. Apr 28 02:14:57.973321 systemd[1570]: Created slice app.slice - User Application Slice. Apr 28 02:14:57.973368 systemd[1570]: Reached target paths.target - Paths. Apr 28 02:14:57.973380 systemd[1570]: Reached target timers.target - Timers. Apr 28 02:14:57.974853 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 02:14:57.987529 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 02:14:57.987680 systemd[1570]: Reached target sockets.target - Sockets. Apr 28 02:14:57.987716 systemd[1570]: Reached target basic.target - Basic System. Apr 28 02:14:57.987765 systemd[1570]: Reached target default.target - Main User Target. Apr 28 02:14:57.987798 systemd[1570]: Startup finished in 143ms. Apr 28 02:14:57.987907 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 02:14:57.989580 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 28 02:14:58.051294 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:36844.service - OpenSSH per-connection server daemon (10.0.0.1:36844). Apr 28 02:14:58.089563 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 36844 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.090926 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.095093 systemd-logind[1454]: New session 2 of user core. Apr 28 02:14:58.103188 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 02:14:58.158690 sshd[1581]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:58.170626 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:36844.service: Deactivated successfully. Apr 28 02:14:58.171956 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 02:14:58.173032 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Apr 28 02:14:58.186278 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846). Apr 28 02:14:58.187635 systemd-logind[1454]: Removed session 2. Apr 28 02:14:58.212975 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.214380 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.218513 systemd-logind[1454]: New session 3 of user core. Apr 28 02:14:58.232061 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 02:14:58.280588 sshd[1588]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:58.298377 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:36846.service: Deactivated successfully. Apr 28 02:14:58.299677 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 02:14:58.300857 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. 
Apr 28 02:14:58.301855 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:36852.service - OpenSSH per-connection server daemon (10.0.0.1:36852). Apr 28 02:14:58.302878 systemd-logind[1454]: Removed session 3. Apr 28 02:14:58.336570 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 36852 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.337872 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.342466 systemd-logind[1454]: New session 4 of user core. Apr 28 02:14:58.351999 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 02:14:58.404362 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:58.413318 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:36852.service: Deactivated successfully. Apr 28 02:14:58.414604 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 02:14:58.415799 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Apr 28 02:14:58.435286 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:36864.service - OpenSSH per-connection server daemon (10.0.0.1:36864). Apr 28 02:14:58.437144 systemd-logind[1454]: Removed session 4. Apr 28 02:14:58.460820 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 36864 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.462124 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.465607 systemd-logind[1454]: New session 5 of user core. Apr 28 02:14:58.475239 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 28 02:14:58.535023 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 02:14:58.535281 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:14:58.552178 sudo[1605]: pam_unix(sudo:session): session closed for user root Apr 28 02:14:58.554100 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:58.577497 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:36864.service: Deactivated successfully. Apr 28 02:14:58.578709 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 02:14:58.579865 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Apr 28 02:14:58.580856 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:36878.service - OpenSSH per-connection server daemon (10.0.0.1:36878). Apr 28 02:14:58.581930 systemd-logind[1454]: Removed session 5. Apr 28 02:14:58.609609 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 36878 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.610994 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.614641 systemd-logind[1454]: New session 6 of user core. Apr 28 02:14:58.628363 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 02:14:58.680065 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 02:14:58.680291 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:14:58.683368 sudo[1614]: pam_unix(sudo:session): session closed for user root Apr 28 02:14:58.689501 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 02:14:58.690317 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:14:58.711474 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 28 02:14:58.713138 auditctl[1617]: No rules Apr 28 02:14:58.713382 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 02:14:58.713532 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 28 02:14:58.715326 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 02:14:58.737739 augenrules[1635]: No rules Apr 28 02:14:58.738721 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 02:14:58.739468 sudo[1613]: pam_unix(sudo:session): session closed for user root Apr 28 02:14:58.740884 sshd[1610]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:58.753877 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:36878.service: Deactivated successfully. Apr 28 02:14:58.754963 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 02:14:58.755865 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Apr 28 02:14:58.763131 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:36882.service - OpenSSH per-connection server daemon (10.0.0.1:36882). Apr 28 02:14:58.763896 systemd-logind[1454]: Removed session 6. Apr 28 02:14:58.787862 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 36882 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:58.788807 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:58.792336 systemd-logind[1454]: New session 7 of user core. Apr 28 02:14:58.802017 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 02:14:58.852059 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 02:14:58.852273 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:14:59.114194 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 28 02:14:59.114238 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 02:14:59.354167 dockerd[1664]: time="2026-04-28T02:14:59.354105510Z" level=info msg="Starting up" Apr 28 02:14:59.452254 systemd[1]: var-lib-docker-metacopy\x2dcheck2929187629-merged.mount: Deactivated successfully. Apr 28 02:14:59.468520 dockerd[1664]: time="2026-04-28T02:14:59.468448140Z" level=info msg="Loading containers: start." Apr 28 02:14:59.570871 kernel: Initializing XFRM netlink socket Apr 28 02:14:59.648362 systemd-networkd[1410]: docker0: Link UP Apr 28 02:14:59.670359 dockerd[1664]: time="2026-04-28T02:14:59.670283093Z" level=info msg="Loading containers: done." Apr 28 02:14:59.680943 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2415554381-merged.mount: Deactivated successfully. Apr 28 02:14:59.683222 dockerd[1664]: time="2026-04-28T02:14:59.683150023Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 02:14:59.683302 dockerd[1664]: time="2026-04-28T02:14:59.683278589Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 02:14:59.683404 dockerd[1664]: time="2026-04-28T02:14:59.683379634Z" level=info msg="Daemon has completed initialization" Apr 28 02:14:59.711178 dockerd[1664]: time="2026-04-28T02:14:59.711073406Z" level=info msg="API listen on /run/docker.sock" Apr 28 02:14:59.711193 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 28 02:15:00.089678 containerd[1469]: time="2026-04-28T02:15:00.089543033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 28 02:15:00.519946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291105273.mount: Deactivated successfully. Apr 28 02:15:01.154004 containerd[1469]: time="2026-04-28T02:15:01.153927322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.154764 containerd[1469]: time="2026-04-28T02:15:01.154732139Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 28 02:15:01.155632 containerd[1469]: time="2026-04-28T02:15:01.155592343Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.159160 containerd[1469]: time="2026-04-28T02:15:01.159062210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.160169 containerd[1469]: time="2026-04-28T02:15:01.160139500Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.070544649s" Apr 28 02:15:01.160169 containerd[1469]: time="2026-04-28T02:15:01.160172006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 28 02:15:01.160756 containerd[1469]: 
time="2026-04-28T02:15:01.160735641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 28 02:15:01.895979 containerd[1469]: time="2026-04-28T02:15:01.895903164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.896664 containerd[1469]: time="2026-04-28T02:15:01.896614379Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 28 02:15:01.897392 containerd[1469]: time="2026-04-28T02:15:01.897352136Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.899741 containerd[1469]: time="2026-04-28T02:15:01.899698579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:01.900672 containerd[1469]: time="2026-04-28T02:15:01.900639641Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 739.876188ms" Apr 28 02:15:01.900707 containerd[1469]: time="2026-04-28T02:15:01.900678577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 28 02:15:01.901161 containerd[1469]: time="2026-04-28T02:15:01.901139506Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 28 
02:15:02.602599 containerd[1469]: time="2026-04-28T02:15:02.602514799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:02.603135 containerd[1469]: time="2026-04-28T02:15:02.603099575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 28 02:15:02.604646 containerd[1469]: time="2026-04-28T02:15:02.604454149Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:02.607002 containerd[1469]: time="2026-04-28T02:15:02.606795643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:02.607775 containerd[1469]: time="2026-04-28T02:15:02.607742338Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 706.575962ms" Apr 28 02:15:02.607852 containerd[1469]: time="2026-04-28T02:15:02.607793355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 28 02:15:02.608613 containerd[1469]: time="2026-04-28T02:15:02.608318658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 28 02:15:03.339300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133239894.mount: Deactivated successfully. 
Apr 28 02:15:03.678607 containerd[1469]: time="2026-04-28T02:15:03.678338008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:03.679078 containerd[1469]: time="2026-04-28T02:15:03.679044402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 28 02:15:03.680122 containerd[1469]: time="2026-04-28T02:15:03.680058736Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:03.681628 containerd[1469]: time="2026-04-28T02:15:03.681564229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:03.681977 containerd[1469]: time="2026-04-28T02:15:03.681933405Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.07359099s" Apr 28 02:15:03.681977 containerd[1469]: time="2026-04-28T02:15:03.681970656Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 28 02:15:03.682505 containerd[1469]: time="2026-04-28T02:15:03.682480366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 28 02:15:04.049218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 02:15:04.054055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 02:15:04.059384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955357192.mount: Deactivated successfully. Apr 28 02:15:04.162867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:15:04.167265 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:15:04.213670 kubelet[1896]: E0428 02:15:04.213417 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:15:04.218910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:15:04.219083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:15:04.788536 containerd[1469]: time="2026-04-28T02:15:04.788441045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:04.789618 containerd[1469]: time="2026-04-28T02:15:04.789364723Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 02:15:04.791660 containerd[1469]: time="2026-04-28T02:15:04.791576016Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:04.795525 containerd[1469]: time="2026-04-28T02:15:04.795463955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:04.796502 containerd[1469]: 
time="2026-04-28T02:15:04.796449891Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.11393854s" Apr 28 02:15:04.796550 containerd[1469]: time="2026-04-28T02:15:04.796504611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 02:15:04.797180 containerd[1469]: time="2026-04-28T02:15:04.797036148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 28 02:15:05.180342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695617717.mount: Deactivated successfully. Apr 28 02:15:05.187770 containerd[1469]: time="2026-04-28T02:15:05.187696044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:05.188793 containerd[1469]: time="2026-04-28T02:15:05.188691246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 02:15:05.190360 containerd[1469]: time="2026-04-28T02:15:05.190266979Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:05.192236 containerd[1469]: time="2026-04-28T02:15:05.192196995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:05.192789 containerd[1469]: time="2026-04-28T02:15:05.192745734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 395.68735ms" Apr 28 02:15:05.192789 containerd[1469]: time="2026-04-28T02:15:05.192778270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 02:15:05.193320 containerd[1469]: time="2026-04-28T02:15:05.193290158Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 02:15:05.632739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492925242.mount: Deactivated successfully. Apr 28 02:15:06.176175 containerd[1469]: time="2026-04-28T02:15:06.176120772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:06.176813 containerd[1469]: time="2026-04-28T02:15:06.176771556Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 28 02:15:06.177924 containerd[1469]: time="2026-04-28T02:15:06.177887177Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:06.181169 containerd[1469]: time="2026-04-28T02:15:06.181139865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:06.182014 containerd[1469]: time="2026-04-28T02:15:06.181994024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 988.675796ms" Apr 28 02:15:06.182055 containerd[1469]: time="2026-04-28T02:15:06.182018498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 02:15:09.170533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:15:09.180069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:15:09.200883 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Apr 28 02:15:09.200901 systemd[1]: Reloading... Apr 28 02:15:09.255516 zram_generator::config[2089]: No configuration found. Apr 28 02:15:09.329714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 02:15:09.378435 systemd[1]: Reloading finished in 177 ms. Apr 28 02:15:09.415038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:15:09.416302 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:15:09.418260 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 02:15:09.418440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:15:09.419630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:15:09.514803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:15:09.518468 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 02:15:09.553377 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:15:09.553377 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 02:15:09.553377 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:15:09.553377 kubelet[2139]: I0428 02:15:09.553160 2139 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 02:15:10.076543 kubelet[2139]: I0428 02:15:10.076396 2139 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 02:15:10.076743 kubelet[2139]: I0428 02:15:10.076604 2139 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 02:15:10.077373 kubelet[2139]: I0428 02:15:10.077327 2139 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 02:15:10.095642 kubelet[2139]: E0428 02:15:10.095587 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 02:15:10.097228 kubelet[2139]: I0428 02:15:10.097208 2139 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 02:15:10.101862 kubelet[2139]: E0428 02:15:10.101812 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 02:15:10.101862 kubelet[2139]: I0428 02:15:10.101857 2139 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 02:15:10.104905 kubelet[2139]: I0428 02:15:10.104890 2139 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 02:15:10.105431 kubelet[2139]: I0428 02:15:10.105384 2139 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 02:15:10.105572 kubelet[2139]: I0428 02:15:10.105419 2139 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 02:15:10.105572 kubelet[2139]: I0428 02:15:10.105563 2139 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 02:15:10.105572 kubelet[2139]: I0428 02:15:10.105570 2139 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 02:15:10.105688 kubelet[2139]: I0428 02:15:10.105684 2139 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:15:10.108719 kubelet[2139]: I0428 02:15:10.108684 2139 kubelet.go:480] "Attempting to sync node with API 
server" Apr 28 02:15:10.108719 kubelet[2139]: I0428 02:15:10.108710 2139 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 02:15:10.108759 kubelet[2139]: I0428 02:15:10.108736 2139 kubelet.go:386] "Adding apiserver pod source" Apr 28 02:15:10.110398 kubelet[2139]: I0428 02:15:10.109928 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 02:15:10.116163 kubelet[2139]: E0428 02:15:10.115627 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 02:15:10.116163 kubelet[2139]: I0428 02:15:10.115707 2139 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 02:15:10.116163 kubelet[2139]: E0428 02:15:10.115881 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 02:15:10.116163 kubelet[2139]: I0428 02:15:10.116108 2139 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 02:15:10.116690 kubelet[2139]: W0428 02:15:10.116656 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 28 02:15:10.120623 kubelet[2139]: I0428 02:15:10.120587 2139 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 02:15:10.120675 kubelet[2139]: I0428 02:15:10.120668 2139 server.go:1289] "Started kubelet" Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.121369 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.121590 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.121655 2139 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.121690 2139 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.122316 2139 server.go:317] "Adding debug handlers to kubelet server" Apr 28 02:15:10.123860 kubelet[2139]: I0428 02:15:10.123093 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 02:15:10.124515 kubelet[2139]: E0428 02:15:10.124458 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:15:10.124515 kubelet[2139]: I0428 02:15:10.124509 2139 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 02:15:10.124664 kubelet[2139]: I0428 02:15:10.124657 2139 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 02:15:10.124793 kubelet[2139]: I0428 02:15:10.124761 2139 reconciler.go:26] "Reconciler: start to sync state" Apr 28 02:15:10.125043 kubelet[2139]: E0428 02:15:10.125008 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: 
connect: connection refused" interval="200ms" Apr 28 02:15:10.125169 kubelet[2139]: E0428 02:15:10.123996 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa63989ff0924c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 02:15:10.120604236 +0000 UTC m=+0.599050589,LastTimestamp:2026-04-28 02:15:10.120604236 +0000 UTC m=+0.599050589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 02:15:10.125257 kubelet[2139]: I0428 02:15:10.125236 2139 factory.go:223] Registration of the systemd container factory successfully Apr 28 02:15:10.125351 kubelet[2139]: I0428 02:15:10.125281 2139 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 02:15:10.125475 kubelet[2139]: E0428 02:15:10.125444 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 02:15:10.125964 kubelet[2139]: E0428 02:15:10.125947 2139 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 02:15:10.126472 kubelet[2139]: I0428 02:15:10.126447 2139 factory.go:223] Registration of the containerd container factory successfully Apr 28 02:15:10.136753 kubelet[2139]: I0428 02:15:10.136718 2139 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 02:15:10.137816 kubelet[2139]: I0428 02:15:10.137775 2139 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 02:15:10.137816 kubelet[2139]: I0428 02:15:10.137799 2139 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 02:15:10.137816 kubelet[2139]: I0428 02:15:10.137812 2139 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 02:15:10.137816 kubelet[2139]: I0428 02:15:10.137821 2139 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 02:15:10.137974 kubelet[2139]: E0428 02:15:10.137867 2139 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:15:10.140047 kubelet[2139]: I0428 02:15:10.139923 2139 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 02:15:10.140047 kubelet[2139]: I0428 02:15:10.139933 2139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 02:15:10.140047 kubelet[2139]: I0428 02:15:10.139944 2139 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:15:10.141170 kubelet[2139]: E0428 02:15:10.141128 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 02:15:10.182437 kubelet[2139]: I0428 
02:15:10.181755 2139 policy_none.go:49] "None policy: Start" Apr 28 02:15:10.182437 kubelet[2139]: I0428 02:15:10.181796 2139 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 02:15:10.182437 kubelet[2139]: I0428 02:15:10.181815 2139 state_mem.go:35] "Initializing new in-memory state store" Apr 28 02:15:10.203264 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 28 02:15:10.214535 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 02:15:10.217037 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 02:15:10.224582 kubelet[2139]: E0428 02:15:10.224542 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:15:10.226571 kubelet[2139]: E0428 02:15:10.226504 2139 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:15:10.226707 kubelet[2139]: I0428 02:15:10.226672 2139 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:15:10.226749 kubelet[2139]: I0428 02:15:10.226713 2139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:15:10.227066 kubelet[2139]: I0428 02:15:10.226930 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:15:10.227794 kubelet[2139]: E0428 02:15:10.227774 2139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 02:15:10.227865 kubelet[2139]: E0428 02:15:10.227805 2139 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 02:15:10.267576 systemd[1]: Created slice kubepods-burstable-pod8f3e38e39b9cdfadb5c3e7160351e10e.slice - libcontainer container kubepods-burstable-pod8f3e38e39b9cdfadb5c3e7160351e10e.slice. Apr 28 02:15:10.287496 kubelet[2139]: E0428 02:15:10.287428 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:15:10.290434 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 28 02:15:10.291802 kubelet[2139]: E0428 02:15:10.291781 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:15:10.293087 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 28 02:15:10.294030 kubelet[2139]: E0428 02:15:10.294011 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:15:10.325796 kubelet[2139]: E0428 02:15:10.325715 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Apr 28 02:15:10.325965 kubelet[2139]: I0428 02:15:10.325868 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:15:10.325965 kubelet[2139]: I0428 02:15:10.325892 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:15:10.325965 kubelet[2139]: I0428 02:15:10.325910 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:15:10.325965 kubelet[2139]: I0428 02:15:10.325922 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:15:10.325965 kubelet[2139]: I0428 02:15:10.325936 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 02:15:10.326078 kubelet[2139]: I0428 02:15:10.325947 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:15:10.326078 kubelet[2139]: I0428 02:15:10.325957 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:15:10.326078 kubelet[2139]: I0428 02:15:10.325968 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:15:10.326078 kubelet[2139]: I0428 02:15:10.325981 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:15:10.328865 kubelet[2139]: I0428 02:15:10.328814 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:15:10.329089 kubelet[2139]: E0428 02:15:10.329066 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 28 02:15:10.530799 kubelet[2139]: I0428 02:15:10.530756 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:15:10.531126 kubelet[2139]: E0428 02:15:10.531088 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 28 02:15:10.589070 kubelet[2139]: E0428 02:15:10.588893 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:10.589696 containerd[1469]: time="2026-04-28T02:15:10.589590277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f3e38e39b9cdfadb5c3e7160351e10e,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:10.593039 kubelet[2139]: E0428 02:15:10.593005 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:10.593898 containerd[1469]: time="2026-04-28T02:15:10.593743914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:10.595240 
kubelet[2139]: E0428 02:15:10.595194 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:10.595632 containerd[1469]: time="2026-04-28T02:15:10.595589217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:10.726872 kubelet[2139]: E0428 02:15:10.726794 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Apr 28 02:15:10.932489 kubelet[2139]: I0428 02:15:10.932337 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:15:10.932782 kubelet[2139]: E0428 02:15:10.932744 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 28 02:15:10.956590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861294627.mount: Deactivated successfully. 
Apr 28 02:15:10.962045 containerd[1469]: time="2026-04-28T02:15:10.961985388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:15:10.963375 containerd[1469]: time="2026-04-28T02:15:10.963338822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:15:10.964121 containerd[1469]: time="2026-04-28T02:15:10.964083720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:15:10.964877 containerd[1469]: time="2026-04-28T02:15:10.964814449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:15:10.965514 containerd[1469]: time="2026-04-28T02:15:10.965410707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 02:15:10.966589 containerd[1469]: time="2026-04-28T02:15:10.966555657Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:15:10.967182 containerd[1469]: time="2026-04-28T02:15:10.967143761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:15:10.969202 containerd[1469]: time="2026-04-28T02:15:10.969165722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:15:10.970495 
containerd[1469]: time="2026-04-28T02:15:10.970463133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 374.80113ms" Apr 28 02:15:10.970985 containerd[1469]: time="2026-04-28T02:15:10.970958705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 377.140293ms" Apr 28 02:15:10.971823 containerd[1469]: time="2026-04-28T02:15:10.971767110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 382.074905ms" Apr 28 02:15:11.047141 containerd[1469]: time="2026-04-28T02:15:11.047027991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:11.047141 containerd[1469]: time="2026-04-28T02:15:11.047112236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:11.047428 containerd[1469]: time="2026-04-28T02:15:11.047322950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.047490 containerd[1469]: time="2026-04-28T02:15:11.047417572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.049683 containerd[1469]: time="2026-04-28T02:15:11.049311913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:11.049683 containerd[1469]: time="2026-04-28T02:15:11.049464633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:11.049683 containerd[1469]: time="2026-04-28T02:15:11.049480625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.049683 containerd[1469]: time="2026-04-28T02:15:11.049545659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.050755 containerd[1469]: time="2026-04-28T02:15:11.050655371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:11.050755 containerd[1469]: time="2026-04-28T02:15:11.050716974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:11.051852 containerd[1469]: time="2026-04-28T02:15:11.050777024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.051852 containerd[1469]: time="2026-04-28T02:15:11.051345025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:11.068999 systemd[1]: Started cri-containerd-aa73643727f6689ac6e46da2913bca62de4e482fb642042b7f566c7a9ab56768.scope - libcontainer container aa73643727f6689ac6e46da2913bca62de4e482fb642042b7f566c7a9ab56768. 
Apr 28 02:15:11.069897 systemd[1]: Started cri-containerd-d6799095814df51221bdb437517548d7eb53ed13202678d91bbb337d24571359.scope - libcontainer container d6799095814df51221bdb437517548d7eb53ed13202678d91bbb337d24571359.
Apr 28 02:15:11.072380 systemd[1]: Started cri-containerd-8aca09f26e24f107d0906db80ff78013b1f6b889cc9e2c6d0acb8ee2f36bd5e2.scope - libcontainer container 8aca09f26e24f107d0906db80ff78013b1f6b889cc9e2c6d0acb8ee2f36bd5e2.
Apr 28 02:15:11.106319 containerd[1469]: time="2026-04-28T02:15:11.106249261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa73643727f6689ac6e46da2913bca62de4e482fb642042b7f566c7a9ab56768\""
Apr 28 02:15:11.109571 kubelet[2139]: E0428 02:15:11.109545 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:11.113722 containerd[1469]: time="2026-04-28T02:15:11.113667707Z" level=info msg="CreateContainer within sandbox \"aa73643727f6689ac6e46da2913bca62de4e482fb642042b7f566c7a9ab56768\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 28 02:15:11.114977 containerd[1469]: time="2026-04-28T02:15:11.114939857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f3e38e39b9cdfadb5c3e7160351e10e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aca09f26e24f107d0906db80ff78013b1f6b889cc9e2c6d0acb8ee2f36bd5e2\""
Apr 28 02:15:11.115556 kubelet[2139]: E0428 02:15:11.115522 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:11.118553 containerd[1469]: time="2026-04-28T02:15:11.118475028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6799095814df51221bdb437517548d7eb53ed13202678d91bbb337d24571359\""
Apr 28 02:15:11.119860 kubelet[2139]: E0428 02:15:11.119504 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:11.120207 containerd[1469]: time="2026-04-28T02:15:11.120184438Z" level=info msg="CreateContainer within sandbox \"8aca09f26e24f107d0906db80ff78013b1f6b889cc9e2c6d0acb8ee2f36bd5e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 28 02:15:11.123877 containerd[1469]: time="2026-04-28T02:15:11.123809952Z" level=info msg="CreateContainer within sandbox \"d6799095814df51221bdb437517548d7eb53ed13202678d91bbb337d24571359\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 28 02:15:11.131234 containerd[1469]: time="2026-04-28T02:15:11.131174266Z" level=info msg="CreateContainer within sandbox \"aa73643727f6689ac6e46da2913bca62de4e482fb642042b7f566c7a9ab56768\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d8fa703a7f07d785e45a604c042a73c4e0f59ec4725239aeef3214aaa9d2153\""
Apr 28 02:15:11.131990 containerd[1469]: time="2026-04-28T02:15:11.131950597Z" level=info msg="StartContainer for \"6d8fa703a7f07d785e45a604c042a73c4e0f59ec4725239aeef3214aaa9d2153\""
Apr 28 02:15:11.137900 containerd[1469]: time="2026-04-28T02:15:11.137791779Z" level=info msg="CreateContainer within sandbox \"8aca09f26e24f107d0906db80ff78013b1f6b889cc9e2c6d0acb8ee2f36bd5e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d4fbc27a2dc8d89627edfc1b2ac3e78f5ff653db6a003e9318237a24a916d3f\""
Apr 28 02:15:11.138973 containerd[1469]: time="2026-04-28T02:15:11.138697855Z" level=info msg="StartContainer for \"6d4fbc27a2dc8d89627edfc1b2ac3e78f5ff653db6a003e9318237a24a916d3f\""
Apr 28 02:15:11.146806 containerd[1469]: time="2026-04-28T02:15:11.146745441Z" level=info msg="CreateContainer within sandbox \"d6799095814df51221bdb437517548d7eb53ed13202678d91bbb337d24571359\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d51bc49d63925456ca366c8ebcc802380551bc0420b0f09e3fdec12cabb78cb8\""
Apr 28 02:15:11.147344 containerd[1469]: time="2026-04-28T02:15:11.147300535Z" level=info msg="StartContainer for \"d51bc49d63925456ca366c8ebcc802380551bc0420b0f09e3fdec12cabb78cb8\""
Apr 28 02:15:11.161258 systemd[1]: Started cri-containerd-6d8fa703a7f07d785e45a604c042a73c4e0f59ec4725239aeef3214aaa9d2153.scope - libcontainer container 6d8fa703a7f07d785e45a604c042a73c4e0f59ec4725239aeef3214aaa9d2153.
Apr 28 02:15:11.164066 systemd[1]: Started cri-containerd-6d4fbc27a2dc8d89627edfc1b2ac3e78f5ff653db6a003e9318237a24a916d3f.scope - libcontainer container 6d4fbc27a2dc8d89627edfc1b2ac3e78f5ff653db6a003e9318237a24a916d3f.
Apr 28 02:15:11.179017 systemd[1]: Started cri-containerd-d51bc49d63925456ca366c8ebcc802380551bc0420b0f09e3fdec12cabb78cb8.scope - libcontainer container d51bc49d63925456ca366c8ebcc802380551bc0420b0f09e3fdec12cabb78cb8.
Apr 28 02:15:11.209565 containerd[1469]: time="2026-04-28T02:15:11.209451984Z" level=info msg="StartContainer for \"6d4fbc27a2dc8d89627edfc1b2ac3e78f5ff653db6a003e9318237a24a916d3f\" returns successfully"
Apr 28 02:15:11.216751 containerd[1469]: time="2026-04-28T02:15:11.216683878Z" level=info msg="StartContainer for \"6d8fa703a7f07d785e45a604c042a73c4e0f59ec4725239aeef3214aaa9d2153\" returns successfully"
Apr 28 02:15:11.227115 containerd[1469]: time="2026-04-28T02:15:11.227090294Z" level=info msg="StartContainer for \"d51bc49d63925456ca366c8ebcc802380551bc0420b0f09e3fdec12cabb78cb8\" returns successfully"
Apr 28 02:15:11.734692 kubelet[2139]: I0428 02:15:11.734599 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 02:15:12.067539 kubelet[2139]: E0428 02:15:12.067400 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 28 02:15:12.114180 kubelet[2139]: I0428 02:15:12.114109 2139 apiserver.go:52] "Watching apiserver"
Apr 28 02:15:12.124863 kubelet[2139]: I0428 02:15:12.124799 2139 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 02:15:12.152921 kubelet[2139]: E0428 02:15:12.152820 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:15:12.153055 kubelet[2139]: E0428 02:15:12.153004 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:12.153662 kubelet[2139]: E0428 02:15:12.153649 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:15:12.153732 kubelet[2139]: E0428 02:15:12.153718 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:12.154778 kubelet[2139]: E0428 02:15:12.154761 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 02:15:12.154886 kubelet[2139]: E0428 02:15:12.154867 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:12.251376 kubelet[2139]: I0428 02:15:12.251330 2139 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 02:15:12.325572 kubelet[2139]: I0428 02:15:12.325328 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:12.335035 kubelet[2139]: E0428 02:15:12.334966 2139 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:12.335035 kubelet[2139]: I0428 02:15:12.335008 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:12.336323 kubelet[2139]: E0428 02:15:12.336295 2139 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:12.336323 kubelet[2139]: I0428 02:15:12.336316 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:12.337772 kubelet[2139]: E0428 02:15:12.337722 2139 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:13.156255 kubelet[2139]: I0428 02:15:13.156178 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:13.156660 kubelet[2139]: I0428 02:15:13.156617 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:13.156978 kubelet[2139]: I0428 02:15:13.156943 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:13.161232 kubelet[2139]: E0428 02:15:13.161181 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:13.163525 kubelet[2139]: E0428 02:15:13.163428 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:13.163705 kubelet[2139]: E0428 02:15:13.163446 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:13.960724 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)...
Apr 28 02:15:13.960757 systemd[1]: Reloading...
Apr 28 02:15:14.026881 zram_generator::config[2470]: No configuration found.
Apr 28 02:15:14.114465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:15:14.157221 kubelet[2139]: E0428 02:15:14.157179 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.157496 kubelet[2139]: E0428 02:15:14.157349 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.157496 kubelet[2139]: I0428 02:15:14.157450 2139 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:14.168028 kubelet[2139]: E0428 02:15:14.167986 2139 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:14.168170 kubelet[2139]: E0428 02:15:14.168133 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.168918 systemd[1]: Reloading finished in 207 ms.
Apr 28 02:15:14.198967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:15:14.217581 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 02:15:14.217815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:15:14.229760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:15:14.335451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:15:14.340149 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 02:15:14.376185 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 02:15:14.376185 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 02:15:14.376185 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 02:15:14.376502 kubelet[2515]: I0428 02:15:14.376213 2515 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 02:15:14.380760 kubelet[2515]: I0428 02:15:14.380725 2515 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 28 02:15:14.380760 kubelet[2515]: I0428 02:15:14.380749 2515 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 02:15:14.380966 kubelet[2515]: I0428 02:15:14.380939 2515 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 02:15:14.382300 kubelet[2515]: I0428 02:15:14.382257 2515 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 28 02:15:14.384974 kubelet[2515]: I0428 02:15:14.384938 2515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 02:15:14.389440 kubelet[2515]: E0428 02:15:14.389395 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 02:15:14.389440 kubelet[2515]: I0428 02:15:14.389423 2515 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 28 02:15:14.393064 kubelet[2515]: I0428 02:15:14.392940 2515 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 28 02:15:14.393521 kubelet[2515]: I0428 02:15:14.393464 2515 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 02:15:14.393625 kubelet[2515]: I0428 02:15:14.393499 2515 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 02:15:14.393625 kubelet[2515]: I0428 02:15:14.393624 2515 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 02:15:14.393725 kubelet[2515]: I0428 02:15:14.393653 2515 container_manager_linux.go:303] "Creating device plugin manager"
Apr 28 02:15:14.393725 kubelet[2515]: I0428 02:15:14.393694 2515 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:15:14.393899 kubelet[2515]: I0428 02:15:14.393864 2515 kubelet.go:480] "Attempting to sync node with API server"
Apr 28 02:15:14.393899 kubelet[2515]: I0428 02:15:14.393882 2515 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 02:15:14.393899 kubelet[2515]: I0428 02:15:14.393900 2515 kubelet.go:386] "Adding apiserver pod source"
Apr 28 02:15:14.393952 kubelet[2515]: I0428 02:15:14.393913 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 02:15:14.394970 kubelet[2515]: I0428 02:15:14.394943 2515 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 02:15:14.395524 kubelet[2515]: I0428 02:15:14.395509 2515 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 02:15:14.402253 kubelet[2515]: I0428 02:15:14.402212 2515 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 28 02:15:14.402397 kubelet[2515]: I0428 02:15:14.402370 2515 server.go:1289] "Started kubelet"
Apr 28 02:15:14.405509 kubelet[2515]: I0428 02:15:14.403180 2515 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 02:15:14.405509 kubelet[2515]: I0428 02:15:14.403495 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 02:15:14.405509 kubelet[2515]: I0428 02:15:14.403726 2515 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 02:15:14.405509 kubelet[2515]: I0428 02:15:14.404284 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 02:15:14.406442 kubelet[2515]: E0428 02:15:14.406381 2515 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 02:15:14.406723 kubelet[2515]: I0428 02:15:14.406707 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 02:15:14.410723 kubelet[2515]: I0428 02:15:14.408964 2515 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 28 02:15:14.410723 kubelet[2515]: I0428 02:15:14.409087 2515 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 28 02:15:14.410723 kubelet[2515]: I0428 02:15:14.409207 2515 reconciler.go:26] "Reconciler: start to sync state"
Apr 28 02:15:14.410723 kubelet[2515]: I0428 02:15:14.410252 2515 factory.go:223] Registration of the systemd container factory successfully
Apr 28 02:15:14.410723 kubelet[2515]: I0428 02:15:14.410309 2515 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 02:15:14.414061 kubelet[2515]: I0428 02:15:14.414026 2515 factory.go:223] Registration of the containerd container factory successfully
Apr 28 02:15:14.414692 kubelet[2515]: I0428 02:15:14.414304 2515 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 02:15:14.420217 kubelet[2515]: I0428 02:15:14.420189 2515 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 28 02:15:14.421473 kubelet[2515]: I0428 02:15:14.421404 2515 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 28 02:15:14.421545 kubelet[2515]: I0428 02:15:14.421538 2515 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 28 02:15:14.421587 kubelet[2515]: I0428 02:15:14.421582 2515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 02:15:14.421624 kubelet[2515]: I0428 02:15:14.421620 2515 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 28 02:15:14.421759 kubelet[2515]: E0428 02:15:14.421734 2515 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 02:15:14.445606 kubelet[2515]: I0428 02:15:14.445566 2515 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 02:15:14.445606 kubelet[2515]: I0428 02:15:14.445590 2515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 02:15:14.445606 kubelet[2515]: I0428 02:15:14.445604 2515 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:15:14.445845 kubelet[2515]: I0428 02:15:14.445788 2515 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 28 02:15:14.445922 kubelet[2515]: I0428 02:15:14.445856 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 28 02:15:14.445960 kubelet[2515]: I0428 02:15:14.445949 2515 policy_none.go:49] "None policy: Start"
Apr 28 02:15:14.446157 kubelet[2515]: I0428 02:15:14.446125 2515 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 28 02:15:14.446157 kubelet[2515]: I0428 02:15:14.446148 2515 state_mem.go:35] "Initializing new in-memory state store"
Apr 28 02:15:14.446256 kubelet[2515]: I0428 02:15:14.446239 2515 state_mem.go:75] "Updated machine memory state"
Apr 28 02:15:14.449280 kubelet[2515]: E0428 02:15:14.449244 2515 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 28 02:15:14.449370 kubelet[2515]: I0428 02:15:14.449355 2515 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 28 02:15:14.449399 kubelet[2515]: I0428 02:15:14.449372 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 28 02:15:14.449548 kubelet[2515]: I0428 02:15:14.449528 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 28 02:15:14.452403 kubelet[2515]: E0428 02:15:14.451125 2515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 28 02:15:14.523883 kubelet[2515]: I0428 02:15:14.523371 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.523883 kubelet[2515]: I0428 02:15:14.523491 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:14.524146 kubelet[2515]: I0428 02:15:14.524081 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:14.529698 kubelet[2515]: E0428 02:15:14.529571 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.530226 kubelet[2515]: E0428 02:15:14.530208 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:14.530258 kubelet[2515]: E0428 02:15:14.530222 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:14.555986 kubelet[2515]: I0428 02:15:14.555942 2515 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 02:15:14.564176 kubelet[2515]: I0428 02:15:14.564116 2515 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 28 02:15:14.564322 kubelet[2515]: I0428 02:15:14.564260 2515 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 02:15:14.710703 kubelet[2515]: I0428 02:15:14.710617 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.710703 kubelet[2515]: I0428 02:15:14.710674 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.710703 kubelet[2515]: I0428 02:15:14.710692 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.710703 kubelet[2515]: I0428 02:15:14.710709 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:14.710703 kubelet[2515]: I0428 02:15:14.710726 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:14.710991 kubelet[2515]: I0428 02:15:14.710806 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.710991 kubelet[2515]: I0428 02:15:14.710885 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:14.710991 kubelet[2515]: I0428 02:15:14.710905 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:14.710991 kubelet[2515]: I0428 02:15:14.710948 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f3e38e39b9cdfadb5c3e7160351e10e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f3e38e39b9cdfadb5c3e7160351e10e\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:14.830612 kubelet[2515]: E0428 02:15:14.830548 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.830759 kubelet[2515]: E0428 02:15:14.830549 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.830759 kubelet[2515]: E0428 02:15:14.830578 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:14.959419 sudo[2556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 28 02:15:14.959672 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 28 02:15:15.395143 kubelet[2515]: I0428 02:15:15.395027 2515 apiserver.go:52] "Watching apiserver"
Apr 28 02:15:15.409997 kubelet[2515]: I0428 02:15:15.409945 2515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 02:15:15.422688 sudo[2556]: pam_unix(sudo:session): session closed for user root
Apr 28 02:15:15.434458 kubelet[2515]: I0428 02:15:15.434438 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:15.434799 kubelet[2515]: I0428 02:15:15.434761 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:15.435047 kubelet[2515]: I0428 02:15:15.435013 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:15.441980 kubelet[2515]: E0428 02:15:15.441930 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 02:15:15.442096 kubelet[2515]: E0428 02:15:15.442086 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:15.443070 kubelet[2515]: E0428 02:15:15.443011 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 28 02:15:15.443475 kubelet[2515]: E0428 02:15:15.443441 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 28 02:15:15.445680 kubelet[2515]: E0428 02:15:15.443532 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:15.445680 kubelet[2515]: E0428 02:15:15.443593 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:15.462012 kubelet[2515]: I0428 02:15:15.461859 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.46181366 podStartE2EDuration="2.46181366s" podCreationTimestamp="2026-04-28 02:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:15.455942514 +0000 UTC m=+1.111012491" watchObservedRunningTime="2026-04-28 02:15:15.46181366 +0000 UTC m=+1.116883648"
Apr 28 02:15:15.473879 kubelet[2515]: I0428 02:15:15.471543 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.471527911 podStartE2EDuration="2.471527911s" podCreationTimestamp="2026-04-28 02:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:15.471469917 +0000 UTC m=+1.126539903" watchObservedRunningTime="2026-04-28 02:15:15.471527911 +0000 UTC m=+1.126597901"
Apr 28 02:15:15.475271 kubelet[2515]: I0428 02:15:15.475245 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.475235226 podStartE2EDuration="2.475235226s" podCreationTimestamp="2026-04-28 02:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:15.462179449 +0000 UTC m=+1.117249435" watchObservedRunningTime="2026-04-28 02:15:15.475235226 +0000 UTC m=+1.130305211"
Apr 28 02:15:16.436747 kubelet[2515]: E0428 02:15:16.436699 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:16.437078 kubelet[2515]: E0428 02:15:16.436762 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:16.437078 kubelet[2515]: E0428 02:15:16.436968 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:16.840270 sudo[1646]: pam_unix(sudo:session): session closed for user root
Apr 28 02:15:16.841896 sshd[1643]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:16.844453 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:36882.service: Deactivated successfully.
Apr 28 02:15:16.845677 systemd[1]: session-7.scope: Deactivated successfully.
Apr 28 02:15:16.845808 systemd[1]: session-7.scope: Consumed 5.115s CPU time, 161.8M memory peak, 0B memory swap peak.
Apr 28 02:15:16.846199 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Apr 28 02:15:16.846965 systemd-logind[1454]: Removed session 7. Apr 28 02:15:20.831062 kubelet[2515]: E0428 02:15:20.831017 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:20.856393 kubelet[2515]: I0428 02:15:20.856348 2515 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 02:15:20.856801 containerd[1469]: time="2026-04-28T02:15:20.856752841Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 02:15:20.857133 kubelet[2515]: I0428 02:15:20.856958 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 02:15:21.324222 kubelet[2515]: E0428 02:15:21.324043 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:21.445391 kubelet[2515]: E0428 02:15:21.445352 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.008051 systemd[1]: Created slice kubepods-besteffort-pod4056a42c_0ea5_46a0_b251_d0537c8fcf37.slice - libcontainer container kubepods-besteffort-pod4056a42c_0ea5_46a0_b251_d0537c8fcf37.slice. Apr 28 02:15:22.023364 systemd[1]: Created slice kubepods-burstable-podc4d3b86c_d45f_45be_9c28_3b6fbe58bd03.slice - libcontainer container kubepods-burstable-podc4d3b86c_d45f_45be_9c28_3b6fbe58bd03.slice. 
Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099064 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4056a42c-0ea5-46a0-b251-d0537c8fcf37-xtables-lock\") pod \"kube-proxy-k4tpc\" (UID: \"4056a42c-0ea5-46a0-b251-d0537c8fcf37\") " pod="kube-system/kube-proxy-k4tpc" Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099121 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cni-path\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099136 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-etc-cni-netd\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099151 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkh7\" (UniqueName: \"kubernetes.io/projected/4056a42c-0ea5-46a0-b251-d0537c8fcf37-kube-api-access-4dkh7\") pod \"kube-proxy-k4tpc\" (UID: \"4056a42c-0ea5-46a0-b251-d0537c8fcf37\") " pod="kube-system/kube-proxy-k4tpc" Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099165 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-bpf-maps\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.100638 kubelet[2515]: I0428 02:15:22.099176 2515 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hostproc\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099188 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-cgroup\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099199 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-xtables-lock\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099210 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-clustermesh-secrets\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099231 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4056a42c-0ea5-46a0-b251-d0537c8fcf37-kube-proxy\") pod \"kube-proxy-k4tpc\" (UID: \"4056a42c-0ea5-46a0-b251-d0537c8fcf37\") " pod="kube-system/kube-proxy-k4tpc" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099240 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-run\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101215 kubelet[2515]: I0428 02:15:22.099253 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-lib-modules\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101355 kubelet[2515]: I0428 02:15:22.099264 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-kernel\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101355 kubelet[2515]: I0428 02:15:22.099273 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hubble-tls\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101355 kubelet[2515]: I0428 02:15:22.099285 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvnt\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-kube-api-access-thvnt\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101355 kubelet[2515]: I0428 02:15:22.099297 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4056a42c-0ea5-46a0-b251-d0537c8fcf37-lib-modules\") pod \"kube-proxy-k4tpc\" (UID: 
\"4056a42c-0ea5-46a0-b251-d0537c8fcf37\") " pod="kube-system/kube-proxy-k4tpc" Apr 28 02:15:22.101355 kubelet[2515]: I0428 02:15:22.099307 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-config-path\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.101442 kubelet[2515]: I0428 02:15:22.099319 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-net\") pod \"cilium-qmbn9\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " pod="kube-system/cilium-qmbn9" Apr 28 02:15:22.113331 systemd[1]: Created slice kubepods-besteffort-pod781a2502_d870_4752_9150_b228287abd72.slice - libcontainer container kubepods-besteffort-pod781a2502_d870_4752_9150_b228287abd72.slice. 
Apr 28 02:15:22.200355 kubelet[2515]: I0428 02:15:22.200314 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8lv\" (UniqueName: \"kubernetes.io/projected/781a2502-d870-4752-9150-b228287abd72-kube-api-access-mv8lv\") pod \"cilium-operator-6c4d7847fc-j6l7s\" (UID: \"781a2502-d870-4752-9150-b228287abd72\") " pod="kube-system/cilium-operator-6c4d7847fc-j6l7s" Apr 28 02:15:22.200355 kubelet[2515]: I0428 02:15:22.200387 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781a2502-d870-4752-9150-b228287abd72-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j6l7s\" (UID: \"781a2502-d870-4752-9150-b228287abd72\") " pod="kube-system/cilium-operator-6c4d7847fc-j6l7s" Apr 28 02:15:22.318656 kubelet[2515]: E0428 02:15:22.318501 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.319442 containerd[1469]: time="2026-04-28T02:15:22.319377449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4tpc,Uid:4056a42c-0ea5-46a0-b251-d0537c8fcf37,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:22.327215 kubelet[2515]: E0428 02:15:22.327155 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.327763 containerd[1469]: time="2026-04-28T02:15:22.327557025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmbn9,Uid:c4d3b86c-d45f-45be-9c28-3b6fbe58bd03,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:22.346500 containerd[1469]: time="2026-04-28T02:15:22.346259222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:22.346500 containerd[1469]: time="2026-04-28T02:15:22.346298691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:22.346500 containerd[1469]: time="2026-04-28T02:15:22.346310783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.346500 containerd[1469]: time="2026-04-28T02:15:22.346371681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.350517 containerd[1469]: time="2026-04-28T02:15:22.349856422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:22.350517 containerd[1469]: time="2026-04-28T02:15:22.349897365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:22.350517 containerd[1469]: time="2026-04-28T02:15:22.349905035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.350517 containerd[1469]: time="2026-04-28T02:15:22.349974101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.364029 systemd[1]: Started cri-containerd-ff51bbfb3c968b996ab19084ef2577059b66472a8bc1ccaac145dff20ccd8a95.scope - libcontainer container ff51bbfb3c968b996ab19084ef2577059b66472a8bc1ccaac145dff20ccd8a95. Apr 28 02:15:22.367156 systemd[1]: Started cri-containerd-7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4.scope - libcontainer container 7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4. 
Apr 28 02:15:22.383918 containerd[1469]: time="2026-04-28T02:15:22.383848141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4tpc,Uid:4056a42c-0ea5-46a0-b251-d0537c8fcf37,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff51bbfb3c968b996ab19084ef2577059b66472a8bc1ccaac145dff20ccd8a95\"" Apr 28 02:15:22.384888 kubelet[2515]: E0428 02:15:22.384665 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.385497 containerd[1469]: time="2026-04-28T02:15:22.385480624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmbn9,Uid:c4d3b86c-d45f-45be-9c28-3b6fbe58bd03,Namespace:kube-system,Attempt:0,} returns sandbox id \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\"" Apr 28 02:15:22.387430 kubelet[2515]: E0428 02:15:22.387294 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.388212 containerd[1469]: time="2026-04-28T02:15:22.388193208Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 28 02:15:22.388775 containerd[1469]: time="2026-04-28T02:15:22.388674065Z" level=info msg="CreateContainer within sandbox \"ff51bbfb3c968b996ab19084ef2577059b66472a8bc1ccaac145dff20ccd8a95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 02:15:22.403114 containerd[1469]: time="2026-04-28T02:15:22.403078090Z" level=info msg="CreateContainer within sandbox \"ff51bbfb3c968b996ab19084ef2577059b66472a8bc1ccaac145dff20ccd8a95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6cea661fb45e2e5374cb59d9ebc32a567eee6f907e9097b886d6a1752f3c84c\"" Apr 28 02:15:22.403744 containerd[1469]: time="2026-04-28T02:15:22.403674004Z" 
level=info msg="StartContainer for \"b6cea661fb45e2e5374cb59d9ebc32a567eee6f907e9097b886d6a1752f3c84c\"" Apr 28 02:15:22.416969 kubelet[2515]: E0428 02:15:22.416950 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.417819 containerd[1469]: time="2026-04-28T02:15:22.417763398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j6l7s,Uid:781a2502-d870-4752-9150-b228287abd72,Namespace:kube-system,Attempt:0,}" Apr 28 02:15:22.429613 systemd[1]: Started cri-containerd-b6cea661fb45e2e5374cb59d9ebc32a567eee6f907e9097b886d6a1752f3c84c.scope - libcontainer container b6cea661fb45e2e5374cb59d9ebc32a567eee6f907e9097b886d6a1752f3c84c. Apr 28 02:15:22.449325 containerd[1469]: time="2026-04-28T02:15:22.446506600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:15:22.449325 containerd[1469]: time="2026-04-28T02:15:22.446573479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:15:22.449325 containerd[1469]: time="2026-04-28T02:15:22.446590913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.450994 containerd[1469]: time="2026-04-28T02:15:22.450893152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:15:22.455366 containerd[1469]: time="2026-04-28T02:15:22.455339755Z" level=info msg="StartContainer for \"b6cea661fb45e2e5374cb59d9ebc32a567eee6f907e9097b886d6a1752f3c84c\" returns successfully" Apr 28 02:15:22.456617 kubelet[2515]: E0428 02:15:22.456181 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:22.471994 systemd[1]: Started cri-containerd-5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e.scope - libcontainer container 5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e. Apr 28 02:15:22.505576 containerd[1469]: time="2026-04-28T02:15:22.505513439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j6l7s,Uid:781a2502-d870-4752-9150-b228287abd72,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\"" Apr 28 02:15:22.506397 kubelet[2515]: E0428 02:15:22.506362 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:23.459231 kubelet[2515]: E0428 02:15:23.459192 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:24.460991 kubelet[2515]: E0428 02:15:24.460949 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:25.077785 kubelet[2515]: E0428 02:15:25.077612 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 28 02:15:25.085798 kubelet[2515]: I0428 02:15:25.085742 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k4tpc" podStartSLOduration=4.085725797 podStartE2EDuration="4.085725797s" podCreationTimestamp="2026-04-28 02:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:23.467875154 +0000 UTC m=+9.122945135" watchObservedRunningTime="2026-04-28 02:15:25.085725797 +0000 UTC m=+10.740795785" Apr 28 02:15:25.464143 kubelet[2515]: E0428 02:15:25.464104 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:28.444140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764720707.mount: Deactivated successfully. Apr 28 02:15:29.738558 containerd[1469]: time="2026-04-28T02:15:29.738448400Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:29.739098 containerd[1469]: time="2026-04-28T02:15:29.739041569Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 28 02:15:29.740037 containerd[1469]: time="2026-04-28T02:15:29.739985121Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:29.741217 containerd[1469]: time="2026-04-28T02:15:29.741196409Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.35267533s" Apr 28 02:15:29.741301 containerd[1469]: time="2026-04-28T02:15:29.741221662Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 28 02:15:29.746030 containerd[1469]: time="2026-04-28T02:15:29.746006432Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 28 02:15:29.753883 containerd[1469]: time="2026-04-28T02:15:29.753791511Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 02:15:29.765434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851543811.mount: Deactivated successfully. Apr 28 02:15:29.767018 containerd[1469]: time="2026-04-28T02:15:29.766975952Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\"" Apr 28 02:15:29.767479 containerd[1469]: time="2026-04-28T02:15:29.767462856Z" level=info msg="StartContainer for \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\"" Apr 28 02:15:29.800203 systemd[1]: Started cri-containerd-39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b.scope - libcontainer container 39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b. 
Apr 28 02:15:29.823722 containerd[1469]: time="2026-04-28T02:15:29.823671572Z" level=info msg="StartContainer for \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\" returns successfully" Apr 28 02:15:29.830817 systemd[1]: cri-containerd-39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b.scope: Deactivated successfully. Apr 28 02:15:29.996750 containerd[1469]: time="2026-04-28T02:15:29.996557976Z" level=info msg="shim disconnected" id=39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b namespace=k8s.io Apr 28 02:15:29.996750 containerd[1469]: time="2026-04-28T02:15:29.996619356Z" level=warning msg="cleaning up after shim disconnected" id=39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b namespace=k8s.io Apr 28 02:15:29.996750 containerd[1469]: time="2026-04-28T02:15:29.996629322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:15:30.479342 kubelet[2515]: E0428 02:15:30.479289 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:30.486006 containerd[1469]: time="2026-04-28T02:15:30.484631752Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 02:15:30.499908 containerd[1469]: time="2026-04-28T02:15:30.499855905Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\"" Apr 28 02:15:30.500747 containerd[1469]: time="2026-04-28T02:15:30.500681477Z" level=info msg="StartContainer for \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\"" Apr 28 02:15:30.542200 systemd[1]: Started 
cri-containerd-d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e.scope - libcontainer container d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e. Apr 28 02:15:30.562550 containerd[1469]: time="2026-04-28T02:15:30.562506681Z" level=info msg="StartContainer for \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\" returns successfully" Apr 28 02:15:30.571823 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 02:15:30.572766 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:15:30.572864 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:15:30.580913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:15:30.581089 systemd[1]: cri-containerd-d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e.scope: Deactivated successfully. Apr 28 02:15:30.593619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:15:30.602306 containerd[1469]: time="2026-04-28T02:15:30.602244332Z" level=info msg="shim disconnected" id=d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e namespace=k8s.io Apr 28 02:15:30.602306 containerd[1469]: time="2026-04-28T02:15:30.602293336Z" level=warning msg="cleaning up after shim disconnected" id=d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e namespace=k8s.io Apr 28 02:15:30.602306 containerd[1469]: time="2026-04-28T02:15:30.602300444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:15:30.763702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b-rootfs.mount: Deactivated successfully. 
Apr 28 02:15:30.835413 kubelet[2515]: E0428 02:15:30.835344 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:31.137697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246188092.mount: Deactivated successfully. Apr 28 02:15:31.415435 containerd[1469]: time="2026-04-28T02:15:31.415251724Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:31.416074 containerd[1469]: time="2026-04-28T02:15:31.416043460Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 28 02:15:31.417053 containerd[1469]: time="2026-04-28T02:15:31.417008906Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:15:31.418126 containerd[1469]: time="2026-04-28T02:15:31.418101801Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.671974001s" Apr 28 02:15:31.418162 containerd[1469]: time="2026-04-28T02:15:31.418132348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 28 02:15:31.422598 
containerd[1469]: time="2026-04-28T02:15:31.422557345Z" level=info msg="CreateContainer within sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 28 02:15:31.437953 containerd[1469]: time="2026-04-28T02:15:31.437907649Z" level=info msg="CreateContainer within sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\"" Apr 28 02:15:31.438300 containerd[1469]: time="2026-04-28T02:15:31.438243291Z" level=info msg="StartContainer for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\"" Apr 28 02:15:31.463022 systemd[1]: Started cri-containerd-572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277.scope - libcontainer container 572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277. Apr 28 02:15:31.486500 containerd[1469]: time="2026-04-28T02:15:31.486447716Z" level=info msg="StartContainer for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" returns successfully" Apr 28 02:15:31.490001 kubelet[2515]: E0428 02:15:31.489763 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:31.491697 kubelet[2515]: E0428 02:15:31.491639 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:15:31.497602 containerd[1469]: time="2026-04-28T02:15:31.497550953Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 02:15:31.513129 containerd[1469]: 
time="2026-04-28T02:15:31.513058351Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\"" Apr 28 02:15:31.514449 containerd[1469]: time="2026-04-28T02:15:31.513821345Z" level=info msg="StartContainer for \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\"" Apr 28 02:15:31.547583 systemd[1]: Started cri-containerd-ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67.scope - libcontainer container ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67. Apr 28 02:15:31.588918 containerd[1469]: time="2026-04-28T02:15:31.588820347Z" level=info msg="StartContainer for \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\" returns successfully" Apr 28 02:15:31.589261 systemd[1]: cri-containerd-ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67.scope: Deactivated successfully. 
Apr 28 02:15:31.637355 containerd[1469]: time="2026-04-28T02:15:31.637264551Z" level=info msg="shim disconnected" id=ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67 namespace=k8s.io
Apr 28 02:15:31.637355 containerd[1469]: time="2026-04-28T02:15:31.637322321Z" level=warning msg="cleaning up after shim disconnected" id=ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67 namespace=k8s.io
Apr 28 02:15:31.637355 containerd[1469]: time="2026-04-28T02:15:31.637329247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:15:32.496944 kubelet[2515]: E0428 02:15:32.496898 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:32.497544 kubelet[2515]: E0428 02:15:32.497445 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:32.504979 containerd[1469]: time="2026-04-28T02:15:32.504938107Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 28 02:15:32.514022 kubelet[2515]: I0428 02:15:32.513954 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j6l7s" podStartSLOduration=1.602120591 podStartE2EDuration="10.513939391s" podCreationTimestamp="2026-04-28 02:15:22 +0000 UTC" firstStartedPulling="2026-04-28 02:15:22.507044007 +0000 UTC m=+8.162113979" lastFinishedPulling="2026-04-28 02:15:31.418862806 +0000 UTC m=+17.073932779" observedRunningTime="2026-04-28 02:15:31.516175023 +0000 UTC m=+17.171245005" watchObservedRunningTime="2026-04-28 02:15:32.513939391 +0000 UTC m=+18.169009375"
Apr 28 02:15:32.518350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253594294.mount: Deactivated successfully.
Apr 28 02:15:32.520186 containerd[1469]: time="2026-04-28T02:15:32.520140137Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\""
Apr 28 02:15:32.520866 containerd[1469]: time="2026-04-28T02:15:32.520807288Z" level=info msg="StartContainer for \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\""
Apr 28 02:15:32.551038 systemd[1]: Started cri-containerd-3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad.scope - libcontainer container 3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad.
Apr 28 02:15:32.568512 systemd[1]: cri-containerd-3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad.scope: Deactivated successfully.
Apr 28 02:15:32.571481 containerd[1469]: time="2026-04-28T02:15:32.571439846Z" level=info msg="StartContainer for \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\" returns successfully"
Apr 28 02:15:32.591313 containerd[1469]: time="2026-04-28T02:15:32.591248668Z" level=info msg="shim disconnected" id=3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad namespace=k8s.io
Apr 28 02:15:32.591313 containerd[1469]: time="2026-04-28T02:15:32.591306213Z" level=warning msg="cleaning up after shim disconnected" id=3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad namespace=k8s.io
Apr 28 02:15:32.591313 containerd[1469]: time="2026-04-28T02:15:32.591312932Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:15:32.764450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad-rootfs.mount: Deactivated successfully.
Apr 28 02:15:33.499128 kubelet[2515]: E0428 02:15:33.499066 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:33.503908 containerd[1469]: time="2026-04-28T02:15:33.503863951Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 02:15:33.518132 containerd[1469]: time="2026-04-28T02:15:33.518093467Z" level=info msg="CreateContainer within sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\""
Apr 28 02:15:33.518494 containerd[1469]: time="2026-04-28T02:15:33.518471969Z" level=info msg="StartContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\""
Apr 28 02:15:33.556196 systemd[1]: Started cri-containerd-4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe.scope - libcontainer container 4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe.
Apr 28 02:15:33.577445 containerd[1469]: time="2026-04-28T02:15:33.577367510Z" level=info msg="StartContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" returns successfully"
Apr 28 02:15:33.701938 kubelet[2515]: I0428 02:15:33.701886 2515 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 28 02:15:33.739438 systemd[1]: Created slice kubepods-burstable-pod4da40bd4_6798_4dcb_8f5c_3d4cfb3cf49c.slice - libcontainer container kubepods-burstable-pod4da40bd4_6798_4dcb_8f5c_3d4cfb3cf49c.slice.
Apr 28 02:15:33.748427 systemd[1]: Created slice kubepods-burstable-pod7752c7ba_e9a4_45b6_ba5c_58c76e9259ea.slice - libcontainer container kubepods-burstable-pod7752c7ba_e9a4_45b6_ba5c_58c76e9259ea.slice.
Apr 28 02:15:33.802683 kubelet[2515]: I0428 02:15:33.802392 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c-config-volume\") pod \"coredns-674b8bbfcf-btn5f\" (UID: \"4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c\") " pod="kube-system/coredns-674b8bbfcf-btn5f"
Apr 28 02:15:33.802683 kubelet[2515]: I0428 02:15:33.802462 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rv5t\" (UniqueName: \"kubernetes.io/projected/4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c-kube-api-access-4rv5t\") pod \"coredns-674b8bbfcf-btn5f\" (UID: \"4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c\") " pod="kube-system/coredns-674b8bbfcf-btn5f"
Apr 28 02:15:33.802683 kubelet[2515]: I0428 02:15:33.802494 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7752c7ba-e9a4-45b6-ba5c-58c76e9259ea-config-volume\") pod \"coredns-674b8bbfcf-4nbvs\" (UID: \"7752c7ba-e9a4-45b6-ba5c-58c76e9259ea\") " pod="kube-system/coredns-674b8bbfcf-4nbvs"
Apr 28 02:15:33.802683 kubelet[2515]: I0428 02:15:33.802517 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8hq\" (UniqueName: \"kubernetes.io/projected/7752c7ba-e9a4-45b6-ba5c-58c76e9259ea-kube-api-access-jk8hq\") pod \"coredns-674b8bbfcf-4nbvs\" (UID: \"7752c7ba-e9a4-45b6-ba5c-58c76e9259ea\") " pod="kube-system/coredns-674b8bbfcf-4nbvs"
Apr 28 02:15:34.044117 kubelet[2515]: E0428 02:15:34.044051 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:34.047450 containerd[1469]: time="2026-04-28T02:15:34.047347474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btn5f,Uid:4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c,Namespace:kube-system,Attempt:0,}"
Apr 28 02:15:34.052475 kubelet[2515]: E0428 02:15:34.052282 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:34.052866 containerd[1469]: time="2026-04-28T02:15:34.052749752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4nbvs,Uid:7752c7ba-e9a4-45b6-ba5c-58c76e9259ea,Namespace:kube-system,Attempt:0,}"
Apr 28 02:15:34.502989 kubelet[2515]: E0428 02:15:34.502947 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:34.516533 kubelet[2515]: I0428 02:15:34.516299 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qmbn9" podStartSLOduration=6.158502941 podStartE2EDuration="13.516284008s" podCreationTimestamp="2026-04-28 02:15:21 +0000 UTC" firstStartedPulling="2026-04-28 02:15:22.38795393 +0000 UTC m=+8.043023904" lastFinishedPulling="2026-04-28 02:15:29.745734997 +0000 UTC m=+15.400804971" observedRunningTime="2026-04-28 02:15:34.51605014 +0000 UTC m=+20.171120125" watchObservedRunningTime="2026-04-28 02:15:34.516284008 +0000 UTC m=+20.171353991"
Apr 28 02:15:35.475659 systemd-networkd[1410]: cilium_host: Link UP
Apr 28 02:15:35.476032 systemd-networkd[1410]: cilium_net: Link UP
Apr 28 02:15:35.476182 systemd-networkd[1410]: cilium_net: Gained carrier
Apr 28 02:15:35.476277 systemd-networkd[1410]: cilium_host: Gained carrier
Apr 28 02:15:35.519249 kubelet[2515]: E0428 02:15:35.518574 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:35.579734 systemd-networkd[1410]: cilium_vxlan: Link UP
Apr 28 02:15:35.579742 systemd-networkd[1410]: cilium_vxlan: Gained carrier
Apr 28 02:15:35.754873 kernel: NET: Registered PF_ALG protocol family
Apr 28 02:15:35.936025 systemd-networkd[1410]: cilium_net: Gained IPv6LL
Apr 28 02:15:36.128135 systemd-networkd[1410]: cilium_host: Gained IPv6LL
Apr 28 02:15:36.304223 systemd-networkd[1410]: lxc_health: Link UP
Apr 28 02:15:36.312568 systemd-networkd[1410]: lxc_health: Gained carrier
Apr 28 02:15:36.522342 kubelet[2515]: E0428 02:15:36.522299 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:36.526195 update_engine[1462]: I20260428 02:15:36.526088 1462 update_attempter.cc:509] Updating boot flags...
Apr 28 02:15:36.565452 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3376)
Apr 28 02:15:36.580878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3376)
Apr 28 02:15:36.609253 systemd-networkd[1410]: lxc5a9ed804af05: Link UP
Apr 28 02:15:36.618872 kernel: eth0: renamed from tmpcdb30
Apr 28 02:15:36.626582 systemd-networkd[1410]: lxc568a6679b11d: Link UP
Apr 28 02:15:36.638023 kernel: eth0: renamed from tmpd3468
Apr 28 02:15:36.652611 systemd-networkd[1410]: lxc5a9ed804af05: Gained carrier
Apr 28 02:15:36.653573 systemd-networkd[1410]: lxc568a6679b11d: Gained carrier
Apr 28 02:15:36.897144 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL
Apr 28 02:15:37.524294 kubelet[2515]: E0428 02:15:37.524264 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:37.795018 systemd-networkd[1410]: lxc5a9ed804af05: Gained IPv6LL
Apr 28 02:15:38.059292 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:43690.service - OpenSSH per-connection server daemon (10.0.0.1:43690).
Apr 28 02:15:38.093720 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:38.095335 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:38.101799 systemd-logind[1454]: New session 8 of user core.
Apr 28 02:15:38.106102 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 28 02:15:38.113006 systemd-networkd[1410]: lxc568a6679b11d: Gained IPv6LL
Apr 28 02:15:38.252606 sshd[3746]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:38.255291 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:43690.service: Deactivated successfully.
Apr 28 02:15:38.256528 systemd[1]: session-8.scope: Deactivated successfully.
Apr 28 02:15:38.257145 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Apr 28 02:15:38.257955 systemd-logind[1454]: Removed session 8.
Apr 28 02:15:38.304074 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Apr 28 02:15:38.526886 kubelet[2515]: E0428 02:15:38.526703 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:39.819116 containerd[1469]: time="2026-04-28T02:15:39.818991007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:15:39.819116 containerd[1469]: time="2026-04-28T02:15:39.819061571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:15:39.819756 containerd[1469]: time="2026-04-28T02:15:39.819535084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:15:39.819756 containerd[1469]: time="2026-04-28T02:15:39.819705152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:15:39.824617 containerd[1469]: time="2026-04-28T02:15:39.824287147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:15:39.824617 containerd[1469]: time="2026-04-28T02:15:39.824364386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:15:39.824617 containerd[1469]: time="2026-04-28T02:15:39.824373915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:15:39.824759 containerd[1469]: time="2026-04-28T02:15:39.824626489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:15:39.845241 systemd[1]: Started cri-containerd-d3468a3818b4b1bc51aa2df429c90f99d6ccb28a830a49413aa3a8a649807db2.scope - libcontainer container d3468a3818b4b1bc51aa2df429c90f99d6ccb28a830a49413aa3a8a649807db2.
Apr 28 02:15:39.848673 systemd[1]: Started cri-containerd-cdb3013f5638b168851c9320927c7b9cb451e0baaae69aee8d17c2f257aef0db.scope - libcontainer container cdb3013f5638b168851c9320927c7b9cb451e0baaae69aee8d17c2f257aef0db.
Apr 28 02:15:39.858456 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 02:15:39.860360 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 02:15:39.889546 containerd[1469]: time="2026-04-28T02:15:39.889512405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btn5f,Uid:4da40bd4-6798-4dcb-8f5c-3d4cfb3cf49c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3468a3818b4b1bc51aa2df429c90f99d6ccb28a830a49413aa3a8a649807db2\""
Apr 28 02:15:39.891232 kubelet[2515]: E0428 02:15:39.890663 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:39.892528 containerd[1469]: time="2026-04-28T02:15:39.892471814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4nbvs,Uid:7752c7ba-e9a4-45b6-ba5c-58c76e9259ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb3013f5638b168851c9320927c7b9cb451e0baaae69aee8d17c2f257aef0db\""
Apr 28 02:15:39.893149 kubelet[2515]: E0428 02:15:39.893006 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:39.898793 containerd[1469]: time="2026-04-28T02:15:39.898731922Z" level=info msg="CreateContainer within sandbox \"d3468a3818b4b1bc51aa2df429c90f99d6ccb28a830a49413aa3a8a649807db2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 28 02:15:39.902501 containerd[1469]: time="2026-04-28T02:15:39.902450841Z" level=info msg="CreateContainer within sandbox \"cdb3013f5638b168851c9320927c7b9cb451e0baaae69aee8d17c2f257aef0db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 28 02:15:39.921184 containerd[1469]: time="2026-04-28T02:15:39.921117934Z" level=info msg="CreateContainer within sandbox \"d3468a3818b4b1bc51aa2df429c90f99d6ccb28a830a49413aa3a8a649807db2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ecc0b7d206a341257af4db8097c9adf6cb435696333023a2be80c2855cfd640\""
Apr 28 02:15:39.921767 containerd[1469]: time="2026-04-28T02:15:39.921741361Z" level=info msg="StartContainer for \"1ecc0b7d206a341257af4db8097c9adf6cb435696333023a2be80c2855cfd640\""
Apr 28 02:15:39.922289 containerd[1469]: time="2026-04-28T02:15:39.922013596Z" level=info msg="CreateContainer within sandbox \"cdb3013f5638b168851c9320927c7b9cb451e0baaae69aee8d17c2f257aef0db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fecd14a059945ea81afd9df27cd6e516a7f418ab648d1432d817337518fc1456\""
Apr 28 02:15:39.923113 containerd[1469]: time="2026-04-28T02:15:39.923073110Z" level=info msg="StartContainer for \"fecd14a059945ea81afd9df27cd6e516a7f418ab648d1432d817337518fc1456\""
Apr 28 02:15:39.966291 systemd[1]: Started cri-containerd-1ecc0b7d206a341257af4db8097c9adf6cb435696333023a2be80c2855cfd640.scope - libcontainer container 1ecc0b7d206a341257af4db8097c9adf6cb435696333023a2be80c2855cfd640.
Apr 28 02:15:39.967521 systemd[1]: Started cri-containerd-fecd14a059945ea81afd9df27cd6e516a7f418ab648d1432d817337518fc1456.scope - libcontainer container fecd14a059945ea81afd9df27cd6e516a7f418ab648d1432d817337518fc1456.
Apr 28 02:15:39.999791 containerd[1469]: time="2026-04-28T02:15:39.999709266Z" level=info msg="StartContainer for \"1ecc0b7d206a341257af4db8097c9adf6cb435696333023a2be80c2855cfd640\" returns successfully"
Apr 28 02:15:40.000003 containerd[1469]: time="2026-04-28T02:15:39.999890163Z" level=info msg="StartContainer for \"fecd14a059945ea81afd9df27cd6e516a7f418ab648d1432d817337518fc1456\" returns successfully"
Apr 28 02:15:40.532698 kubelet[2515]: E0428 02:15:40.532451 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:40.534249 kubelet[2515]: E0428 02:15:40.534187 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:40.541535 kubelet[2515]: I0428 02:15:40.541486 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-btn5f" podStartSLOduration=18.541474446 podStartE2EDuration="18.541474446s" podCreationTimestamp="2026-04-28 02:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:40.54131395 +0000 UTC m=+26.196383934" watchObservedRunningTime="2026-04-28 02:15:40.541474446 +0000 UTC m=+26.196544429"
Apr 28 02:15:40.560073 kubelet[2515]: I0428 02:15:40.560015 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4nbvs" podStartSLOduration=18.559995915000002 podStartE2EDuration="18.559995915s" podCreationTimestamp="2026-04-28 02:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:15:40.559689723 +0000 UTC m=+26.214759702" watchObservedRunningTime="2026-04-28 02:15:40.559995915 +0000 UTC m=+26.215065899"
Apr 28 02:15:40.824896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171624459.mount: Deactivated successfully.
Apr 28 02:15:41.536980 kubelet[2515]: E0428 02:15:41.536938 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:41.536980 kubelet[2515]: E0428 02:15:41.536953 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:42.538515 kubelet[2515]: E0428 02:15:42.538473 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:42.538920 kubelet[2515]: E0428 02:15:42.538613 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:15:43.269198 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538).
Apr 28 02:15:43.301313 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:43.302564 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:43.306246 systemd-logind[1454]: New session 9 of user core.
Apr 28 02:15:43.316215 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 28 02:15:43.492472 sshd[3938]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:43.495292 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:60538.service: Deactivated successfully.
Apr 28 02:15:43.496519 systemd[1]: session-9.scope: Deactivated successfully.
Apr 28 02:15:43.497175 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit.
Apr 28 02:15:43.498040 systemd-logind[1454]: Removed session 9.
Apr 28 02:15:48.505102 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544).
Apr 28 02:15:48.537463 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:48.538992 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:48.543903 systemd-logind[1454]: New session 10 of user core.
Apr 28 02:15:48.550025 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 28 02:15:48.654296 sshd[3954]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:48.657040 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:60544.service: Deactivated successfully.
Apr 28 02:15:48.658293 systemd[1]: session-10.scope: Deactivated successfully.
Apr 28 02:15:48.658958 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Apr 28 02:15:48.659756 systemd-logind[1454]: Removed session 10.
Apr 28 02:15:53.667516 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:32954.service - OpenSSH per-connection server daemon (10.0.0.1:32954).
Apr 28 02:15:53.699177 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 32954 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:53.700452 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:53.704755 systemd-logind[1454]: New session 11 of user core.
Apr 28 02:15:53.709970 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 28 02:15:53.809497 sshd[3971]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:53.820594 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:32954.service: Deactivated successfully.
Apr 28 02:15:53.821950 systemd[1]: session-11.scope: Deactivated successfully.
Apr 28 02:15:53.823388 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Apr 28 02:15:53.831210 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:32970.service - OpenSSH per-connection server daemon (10.0.0.1:32970).
Apr 28 02:15:53.832110 systemd-logind[1454]: Removed session 11.
Apr 28 02:15:53.858011 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 32970 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:53.859371 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:53.862901 systemd-logind[1454]: New session 12 of user core.
Apr 28 02:15:53.870023 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 28 02:15:54.015111 sshd[3986]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:54.024544 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:32970.service: Deactivated successfully.
Apr 28 02:15:54.026221 systemd[1]: session-12.scope: Deactivated successfully.
Apr 28 02:15:54.029132 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Apr 28 02:15:54.045223 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:32972.service - OpenSSH per-connection server daemon (10.0.0.1:32972).
Apr 28 02:15:54.045991 systemd-logind[1454]: Removed session 12.
Apr 28 02:15:54.073152 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 32972 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:54.074385 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:54.077988 systemd-logind[1454]: New session 13 of user core.
Apr 28 02:15:54.083027 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 28 02:15:54.182299 sshd[3999]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:54.185327 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:32972.service: Deactivated successfully.
Apr 28 02:15:54.187182 systemd[1]: session-13.scope: Deactivated successfully.
Apr 28 02:15:54.188283 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Apr 28 02:15:54.189187 systemd-logind[1454]: Removed session 13.
Apr 28 02:15:59.193315 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:32980.service - OpenSSH per-connection server daemon (10.0.0.1:32980).
Apr 28 02:15:59.221234 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 32980 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:59.222251 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:59.225634 systemd-logind[1454]: New session 14 of user core.
Apr 28 02:15:59.236007 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 28 02:15:59.341239 sshd[4013]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:59.350211 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:32980.service: Deactivated successfully.
Apr 28 02:15:59.351556 systemd[1]: session-14.scope: Deactivated successfully.
Apr 28 02:15:59.352746 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Apr 28 02:15:59.353971 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:32990.service - OpenSSH per-connection server daemon (10.0.0.1:32990).
Apr 28 02:15:59.354746 systemd-logind[1454]: Removed session 14.
Apr 28 02:15:59.395115 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 32990 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:59.396299 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:59.401027 systemd-logind[1454]: New session 15 of user core.
Apr 28 02:15:59.410065 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 28 02:15:59.576221 sshd[4028]: pam_unix(sshd:session): session closed for user core
Apr 28 02:15:59.595122 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:32990.service: Deactivated successfully.
Apr 28 02:15:59.596412 systemd[1]: session-15.scope: Deactivated successfully.
Apr 28 02:15:59.597564 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Apr 28 02:15:59.602139 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:53798.service - OpenSSH per-connection server daemon (10.0.0.1:53798).
Apr 28 02:15:59.602737 systemd-logind[1454]: Removed session 15.
Apr 28 02:15:59.629120 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 53798 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:15:59.630058 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:15:59.633275 systemd-logind[1454]: New session 16 of user core.
Apr 28 02:15:59.644370 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 28 02:16:00.008908 sshd[4041]: pam_unix(sshd:session): session closed for user core
Apr 28 02:16:00.019818 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:53798.service: Deactivated successfully.
Apr 28 02:16:00.021202 systemd[1]: session-16.scope: Deactivated successfully.
Apr 28 02:16:00.024452 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Apr 28 02:16:00.032447 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802).
Apr 28 02:16:00.034340 systemd-logind[1454]: Removed session 16.
Apr 28 02:16:00.060977 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:16:00.062084 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:16:00.065373 systemd-logind[1454]: New session 17 of user core.
Apr 28 02:16:00.070003 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 28 02:16:00.293770 sshd[4061]: pam_unix(sshd:session): session closed for user core
Apr 28 02:16:00.303795 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:53802.service: Deactivated successfully.
Apr 28 02:16:00.305022 systemd[1]: session-17.scope: Deactivated successfully.
Apr 28 02:16:00.305988 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Apr 28 02:16:00.306943 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814).
Apr 28 02:16:00.307631 systemd-logind[1454]: Removed session 17.
Apr 28 02:16:00.334479 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:16:00.335447 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:16:00.339121 systemd-logind[1454]: New session 18 of user core.
Apr 28 02:16:00.351016 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 28 02:16:00.449187 sshd[4074]: pam_unix(sshd:session): session closed for user core
Apr 28 02:16:00.451904 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:53814.service: Deactivated successfully.
Apr 28 02:16:00.453109 systemd[1]: session-18.scope: Deactivated successfully.
Apr 28 02:16:00.453598 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Apr 28 02:16:00.454350 systemd-logind[1454]: Removed session 18.
Apr 28 02:16:05.464814 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:53826.service - OpenSSH per-connection server daemon (10.0.0.1:53826).
Apr 28 02:16:05.493515 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 53826 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:16:05.494598 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:16:05.498122 systemd-logind[1454]: New session 19 of user core.
Apr 28 02:16:05.508107 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 28 02:16:05.610878 sshd[4091]: pam_unix(sshd:session): session closed for user core
Apr 28 02:16:05.613979 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:53826.service: Deactivated successfully.
Apr 28 02:16:05.615403 systemd[1]: session-19.scope: Deactivated successfully.
Apr 28 02:16:05.616481 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Apr 28 02:16:05.617292 systemd-logind[1454]: Removed session 19.
Apr 28 02:16:10.621857 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884).
Apr 28 02:16:10.649717 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:16:10.650948 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:16:10.655871 systemd-logind[1454]: New session 20 of user core.
Apr 28 02:16:10.664986 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 28 02:16:10.773477 sshd[4105]: pam_unix(sshd:session): session closed for user core
Apr 28 02:16:10.788903 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:60884.service: Deactivated successfully.
Apr 28 02:16:10.790295 systemd[1]: session-20.scope: Deactivated successfully.
Apr 28 02:16:10.791548 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Apr 28 02:16:10.792936 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:60886.service - OpenSSH per-connection server daemon (10.0.0.1:60886).
Apr 28 02:16:10.793606 systemd-logind[1454]: Removed session 20.
Apr 28 02:16:10.822249 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 60886 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:16:10.823861 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:16:10.828519 systemd-logind[1454]: New session 21 of user core.
Apr 28 02:16:10.834108 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 28 02:16:12.196245 containerd[1469]: time="2026-04-28T02:16:12.196139752Z" level=info msg="StopContainer for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" with timeout 30 (s)"
Apr 28 02:16:12.197401 containerd[1469]: time="2026-04-28T02:16:12.196723807Z" level=info msg="Stop container \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" with signal terminated"
Apr 28 02:16:12.209429 systemd[1]: cri-containerd-572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277.scope: Deactivated successfully.
Apr 28 02:16:12.216395 containerd[1469]: time="2026-04-28T02:16:12.216303943Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 02:16:12.222134 containerd[1469]: time="2026-04-28T02:16:12.222068499Z" level=info msg="StopContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" with timeout 2 (s)"
Apr 28 02:16:12.222453 containerd[1469]: time="2026-04-28T02:16:12.222400088Z" level=info msg="Stop container \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" with signal terminated"
Apr 28 02:16:12.229461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277-rootfs.mount: Deactivated successfully.
Apr 28 02:16:12.230500 systemd-networkd[1410]: lxc_health: Link DOWN Apr 28 02:16:12.230503 systemd-networkd[1410]: lxc_health: Lost carrier Apr 28 02:16:12.236605 containerd[1469]: time="2026-04-28T02:16:12.236520255Z" level=info msg="shim disconnected" id=572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277 namespace=k8s.io Apr 28 02:16:12.236605 containerd[1469]: time="2026-04-28T02:16:12.236569819Z" level=warning msg="cleaning up after shim disconnected" id=572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277 namespace=k8s.io Apr 28 02:16:12.236605 containerd[1469]: time="2026-04-28T02:16:12.236576390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:12.249212 systemd[1]: cri-containerd-4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe.scope: Deactivated successfully. Apr 28 02:16:12.250573 systemd[1]: cri-containerd-4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe.scope: Consumed 5.851s CPU time. Apr 28 02:16:12.261175 containerd[1469]: time="2026-04-28T02:16:12.261115152Z" level=info msg="StopContainer for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" returns successfully" Apr 28 02:16:12.264434 containerd[1469]: time="2026-04-28T02:16:12.264365048Z" level=info msg="StopPodSandbox for \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\"" Apr 28 02:16:12.264541 containerd[1469]: time="2026-04-28T02:16:12.264449763Z" level=info msg="Container to stop \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.266147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e-shm.mount: Deactivated successfully. 
Apr 28 02:16:12.272741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe-rootfs.mount: Deactivated successfully. Apr 28 02:16:12.273343 systemd[1]: cri-containerd-5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e.scope: Deactivated successfully. Apr 28 02:16:12.285477 containerd[1469]: time="2026-04-28T02:16:12.285403916Z" level=info msg="shim disconnected" id=4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe namespace=k8s.io Apr 28 02:16:12.285477 containerd[1469]: time="2026-04-28T02:16:12.285474791Z" level=warning msg="cleaning up after shim disconnected" id=4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe namespace=k8s.io Apr 28 02:16:12.285477 containerd[1469]: time="2026-04-28T02:16:12.285486767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:12.295032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e-rootfs.mount: Deactivated successfully. 
Apr 28 02:16:12.298740 containerd[1469]: time="2026-04-28T02:16:12.298552994Z" level=info msg="shim disconnected" id=5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e namespace=k8s.io Apr 28 02:16:12.298901 containerd[1469]: time="2026-04-28T02:16:12.298738963Z" level=warning msg="cleaning up after shim disconnected" id=5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e namespace=k8s.io Apr 28 02:16:12.298901 containerd[1469]: time="2026-04-28T02:16:12.298781304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:12.301484 containerd[1469]: time="2026-04-28T02:16:12.301442964Z" level=info msg="StopContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" returns successfully" Apr 28 02:16:12.302160 containerd[1469]: time="2026-04-28T02:16:12.302134507Z" level=info msg="StopPodSandbox for \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\"" Apr 28 02:16:12.302246 containerd[1469]: time="2026-04-28T02:16:12.302170933Z" level=info msg="Container to stop \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.302246 containerd[1469]: time="2026-04-28T02:16:12.302180557Z" level=info msg="Container to stop \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.302246 containerd[1469]: time="2026-04-28T02:16:12.302187739Z" level=info msg="Container to stop \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.302246 containerd[1469]: time="2026-04-28T02:16:12.302195613Z" level=info msg="Container to stop \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.302246 
containerd[1469]: time="2026-04-28T02:16:12.302205509Z" level=info msg="Container to stop \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:16:12.311120 systemd[1]: cri-containerd-7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4.scope: Deactivated successfully. Apr 28 02:16:12.315042 containerd[1469]: time="2026-04-28T02:16:12.314960840Z" level=warning msg="cleanup warnings time=\"2026-04-28T02:16:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 02:16:12.322316 containerd[1469]: time="2026-04-28T02:16:12.322237163Z" level=info msg="TearDown network for sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" successfully" Apr 28 02:16:12.322316 containerd[1469]: time="2026-04-28T02:16:12.322283587Z" level=info msg="StopPodSandbox for \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" returns successfully" Apr 28 02:16:12.334572 containerd[1469]: time="2026-04-28T02:16:12.334313978Z" level=info msg="shim disconnected" id=7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4 namespace=k8s.io Apr 28 02:16:12.334572 containerd[1469]: time="2026-04-28T02:16:12.334390473Z" level=warning msg="cleaning up after shim disconnected" id=7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4 namespace=k8s.io Apr 28 02:16:12.334572 containerd[1469]: time="2026-04-28T02:16:12.334399800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:12.347187 containerd[1469]: time="2026-04-28T02:16:12.347128431Z" level=info msg="TearDown network for sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" successfully" Apr 28 02:16:12.347187 containerd[1469]: time="2026-04-28T02:16:12.347165032Z" level=info msg="StopPodSandbox for 
\"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" returns successfully" Apr 28 02:16:12.402425 kubelet[2515]: I0428 02:16:12.401951 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781a2502-d870-4752-9150-b228287abd72-cilium-config-path\") pod \"781a2502-d870-4752-9150-b228287abd72\" (UID: \"781a2502-d870-4752-9150-b228287abd72\") " Apr 28 02:16:12.402425 kubelet[2515]: I0428 02:16:12.402015 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv8lv\" (UniqueName: \"kubernetes.io/projected/781a2502-d870-4752-9150-b228287abd72-kube-api-access-mv8lv\") pod \"781a2502-d870-4752-9150-b228287abd72\" (UID: \"781a2502-d870-4752-9150-b228287abd72\") " Apr 28 02:16:12.404644 kubelet[2515]: I0428 02:16:12.404592 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781a2502-d870-4752-9150-b228287abd72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "781a2502-d870-4752-9150-b228287abd72" (UID: "781a2502-d870-4752-9150-b228287abd72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:16:12.405280 kubelet[2515]: I0428 02:16:12.405242 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781a2502-d870-4752-9150-b228287abd72-kube-api-access-mv8lv" (OuterVolumeSpecName: "kube-api-access-mv8lv") pod "781a2502-d870-4752-9150-b228287abd72" (UID: "781a2502-d870-4752-9150-b228287abd72"). InnerVolumeSpecName "kube-api-access-mv8lv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:16:12.429141 systemd[1]: Removed slice kubepods-besteffort-pod781a2502_d870_4752_9150_b228287abd72.slice - libcontainer container kubepods-besteffort-pod781a2502_d870_4752_9150_b228287abd72.slice. 
Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503212 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cni-path\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503262 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-etc-cni-netd\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503278 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hostproc\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503303 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-clustermesh-secrets\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503320 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hubble-tls\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503401 kubelet[2515]: I0428 02:16:12.503335 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvnt\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-kube-api-access-thvnt\") pod 
\"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503654 kubelet[2515]: I0428 02:16:12.503352 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-xtables-lock\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.503654 kubelet[2515]: I0428 02:16:12.503442 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cni-path" (OuterVolumeSpecName: "cni-path") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.503654 kubelet[2515]: I0428 02:16:12.503443 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.503654 kubelet[2515]: I0428 02:16:12.503483 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hostproc" (OuterVolumeSpecName: "hostproc") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.503654 kubelet[2515]: I0428 02:16:12.503497 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504862 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-config-path\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504900 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-net\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504913 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-bpf-maps\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504933 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-cgroup\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504949 2515 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-kernel\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505078 kubelet[2515]: I0428 02:16:12.504966 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-run\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.504978 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-lib-modules\") pod \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\" (UID: \"c4d3b86c-d45f-45be-9c28-3b6fbe58bd03\") " Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505017 2515 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505026 2515 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505038 2515 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505046 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781a2502-d870-4752-9150-b228287abd72-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505057 2515 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.505533 kubelet[2515]: I0428 02:16:12.505068 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mv8lv\" (UniqueName: \"kubernetes.io/projected/781a2502-d870-4752-9150-b228287abd72-kube-api-access-mv8lv\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.505655 kubelet[2515]: I0428 02:16:12.505096 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505860 kubelet[2515]: I0428 02:16:12.505722 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505860 kubelet[2515]: I0428 02:16:12.505747 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505860 kubelet[2515]: I0428 02:16:12.505758 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505860 kubelet[2515]: I0428 02:16:12.505795 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.505860 kubelet[2515]: I0428 02:16:12.505818 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:16:12.506234 kubelet[2515]: I0428 02:16:12.506209 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 02:16:12.506429 kubelet[2515]: I0428 02:16:12.506387 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-kube-api-access-thvnt" (OuterVolumeSpecName: "kube-api-access-thvnt") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "kube-api-access-thvnt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:16:12.507341 kubelet[2515]: I0428 02:16:12.507294 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:16:12.507701 kubelet[2515]: I0428 02:16:12.507665 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" (UID: "c4d3b86c-d45f-45be-9c28-3b6fbe58bd03"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:16:12.606009 kubelet[2515]: I0428 02:16:12.605944 2515 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606027 2515 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606043 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thvnt\" (UniqueName: \"kubernetes.io/projected/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-kube-api-access-thvnt\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606056 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606069 2515 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606083 2515 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606093 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 
kubelet[2515]: I0428 02:16:12.606105 2515 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606138 kubelet[2515]: I0428 02:16:12.606116 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.606286 kubelet[2515]: I0428 02:16:12.606130 2515 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 28 02:16:12.610600 kubelet[2515]: I0428 02:16:12.610572 2515 scope.go:117] "RemoveContainer" containerID="4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe" Apr 28 02:16:12.611728 containerd[1469]: time="2026-04-28T02:16:12.611681155Z" level=info msg="RemoveContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\"" Apr 28 02:16:12.614232 systemd[1]: Removed slice kubepods-burstable-podc4d3b86c_d45f_45be_9c28_3b6fbe58bd03.slice - libcontainer container kubepods-burstable-podc4d3b86c_d45f_45be_9c28_3b6fbe58bd03.slice. Apr 28 02:16:12.614311 systemd[1]: kubepods-burstable-podc4d3b86c_d45f_45be_9c28_3b6fbe58bd03.slice: Consumed 5.919s CPU time. 
Apr 28 02:16:12.616174 containerd[1469]: time="2026-04-28T02:16:12.616032764Z" level=info msg="RemoveContainer for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" returns successfully" Apr 28 02:16:12.617410 kubelet[2515]: I0428 02:16:12.616398 2515 scope.go:117] "RemoveContainer" containerID="3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad" Apr 28 02:16:12.617491 containerd[1469]: time="2026-04-28T02:16:12.617439667Z" level=info msg="RemoveContainer for \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\"" Apr 28 02:16:12.620216 containerd[1469]: time="2026-04-28T02:16:12.620158043Z" level=info msg="RemoveContainer for \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\" returns successfully" Apr 28 02:16:12.620600 kubelet[2515]: I0428 02:16:12.620435 2515 scope.go:117] "RemoveContainer" containerID="ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67" Apr 28 02:16:12.621907 containerd[1469]: time="2026-04-28T02:16:12.621808320Z" level=info msg="RemoveContainer for \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\"" Apr 28 02:16:12.625327 containerd[1469]: time="2026-04-28T02:16:12.625244628Z" level=info msg="RemoveContainer for \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\" returns successfully" Apr 28 02:16:12.625620 kubelet[2515]: I0428 02:16:12.625560 2515 scope.go:117] "RemoveContainer" containerID="d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e" Apr 28 02:16:12.627177 containerd[1469]: time="2026-04-28T02:16:12.627037099Z" level=info msg="RemoveContainer for \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\"" Apr 28 02:16:12.632490 containerd[1469]: time="2026-04-28T02:16:12.632435019Z" level=info msg="RemoveContainer for \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\" returns successfully" Apr 28 02:16:12.632796 kubelet[2515]: I0428 02:16:12.632725 2515 scope.go:117] 
"RemoveContainer" containerID="39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b" Apr 28 02:16:12.642048 containerd[1469]: time="2026-04-28T02:16:12.642018629Z" level=info msg="RemoveContainer for \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\"" Apr 28 02:16:12.675664 containerd[1469]: time="2026-04-28T02:16:12.675575179Z" level=info msg="RemoveContainer for \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\" returns successfully" Apr 28 02:16:12.676072 kubelet[2515]: I0428 02:16:12.676032 2515 scope.go:117] "RemoveContainer" containerID="4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe" Apr 28 02:16:12.679868 containerd[1469]: time="2026-04-28T02:16:12.679710198Z" level=error msg="ContainerStatus for \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\": not found" Apr 28 02:16:12.688402 kubelet[2515]: E0428 02:16:12.688339 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\": not found" containerID="4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe" Apr 28 02:16:12.688501 kubelet[2515]: I0428 02:16:12.688405 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe"} err="failed to get container status \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b2a5092f79bc1fa3693929577a3b1fb7895096c4a84d806ccc5e94ae66eaabe\": not found" Apr 28 02:16:12.688501 kubelet[2515]: I0428 02:16:12.688455 2515 scope.go:117] "RemoveContainer" 
containerID="3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad" Apr 28 02:16:12.688808 containerd[1469]: time="2026-04-28T02:16:12.688725409Z" level=error msg="ContainerStatus for \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\": not found" Apr 28 02:16:12.689057 kubelet[2515]: E0428 02:16:12.688974 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\": not found" containerID="3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad" Apr 28 02:16:12.689057 kubelet[2515]: I0428 02:16:12.689015 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad"} err="failed to get container status \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d5363430252f6679e4becdddfbe4db6cdeee97ae118b28ed0ba444ec620a7ad\": not found" Apr 28 02:16:12.689057 kubelet[2515]: I0428 02:16:12.689036 2515 scope.go:117] "RemoveContainer" containerID="ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67" Apr 28 02:16:12.689279 containerd[1469]: time="2026-04-28T02:16:12.689227627Z" level=error msg="ContainerStatus for \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\": not found" Apr 28 02:16:12.689371 kubelet[2515]: E0428 02:16:12.689350 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\": not found" containerID="ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67" Apr 28 02:16:12.689414 kubelet[2515]: I0428 02:16:12.689376 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67"} err="failed to get container status \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccdf5bd0511661145f2a0efec5a812f5c824de10596d4e0281dc3eef68df6a67\": not found" Apr 28 02:16:12.689414 kubelet[2515]: I0428 02:16:12.689394 2515 scope.go:117] "RemoveContainer" containerID="d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e" Apr 28 02:16:12.689617 containerd[1469]: time="2026-04-28T02:16:12.689576636Z" level=error msg="ContainerStatus for \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\": not found" Apr 28 02:16:12.689814 kubelet[2515]: E0428 02:16:12.689752 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\": not found" containerID="d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e" Apr 28 02:16:12.689863 kubelet[2515]: I0428 02:16:12.689814 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e"} err="failed to get container status \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"d0ab78e56b4b672817248feb346a0da5ccee4204d01d217dbf6a2f5caf35db6e\": not found" Apr 28 02:16:12.689863 kubelet[2515]: I0428 02:16:12.689849 2515 scope.go:117] "RemoveContainer" containerID="39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b" Apr 28 02:16:12.690087 containerd[1469]: time="2026-04-28T02:16:12.690051766Z" level=error msg="ContainerStatus for \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\": not found" Apr 28 02:16:12.690187 kubelet[2515]: E0428 02:16:12.690162 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\": not found" containerID="39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b" Apr 28 02:16:12.690187 kubelet[2515]: I0428 02:16:12.690178 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b"} err="failed to get container status \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\": rpc error: code = NotFound desc = an error occurred when try to find container \"39949f7cd823f47a612bbca7228b9580e2db29c547b6089c59fc25d247b4c79b\": not found" Apr 28 02:16:12.690235 kubelet[2515]: I0428 02:16:12.690189 2515 scope.go:117] "RemoveContainer" containerID="572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277" Apr 28 02:16:12.691345 containerd[1469]: time="2026-04-28T02:16:12.691132788Z" level=info msg="RemoveContainer for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\"" Apr 28 02:16:12.694030 containerd[1469]: time="2026-04-28T02:16:12.693979461Z" level=info msg="RemoveContainer for 
\"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" returns successfully" Apr 28 02:16:12.694201 kubelet[2515]: I0428 02:16:12.694152 2515 scope.go:117] "RemoveContainer" containerID="572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277" Apr 28 02:16:12.694420 containerd[1469]: time="2026-04-28T02:16:12.694367200Z" level=error msg="ContainerStatus for \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\": not found" Apr 28 02:16:12.694498 kubelet[2515]: E0428 02:16:12.694475 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\": not found" containerID="572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277" Apr 28 02:16:12.694523 kubelet[2515]: I0428 02:16:12.694507 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277"} err="failed to get container status \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\": rpc error: code = NotFound desc = an error occurred when try to find container \"572500f9a2b264e2716c774e6e85bec902ec8fe29737d42c4e0e0507f5359277\": not found" Apr 28 02:16:13.200552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4-rootfs.mount: Deactivated successfully. Apr 28 02:16:13.200655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4-shm.mount: Deactivated successfully. 
Apr 28 02:16:13.200701 systemd[1]: var-lib-kubelet-pods-781a2502\x2dd870\x2d4752\x2d9150\x2db228287abd72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmv8lv.mount: Deactivated successfully. Apr 28 02:16:13.200753 systemd[1]: var-lib-kubelet-pods-c4d3b86c\x2dd45f\x2d45be\x2d9c28\x2d3b6fbe58bd03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthvnt.mount: Deactivated successfully. Apr 28 02:16:13.200821 systemd[1]: var-lib-kubelet-pods-c4d3b86c\x2dd45f\x2d45be\x2d9c28\x2d3b6fbe58bd03-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 28 02:16:13.200898 systemd[1]: var-lib-kubelet-pods-c4d3b86c\x2dd45f\x2d45be\x2d9c28\x2d3b6fbe58bd03-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 28 02:16:14.129233 sshd[4119]: pam_unix(sshd:session): session closed for user core Apr 28 02:16:14.141322 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:60886.service: Deactivated successfully. Apr 28 02:16:14.142624 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 02:16:14.143883 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Apr 28 02:16:14.144707 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:60898.service - OpenSSH per-connection server daemon (10.0.0.1:60898). Apr 28 02:16:14.145482 systemd-logind[1454]: Removed session 21. Apr 28 02:16:14.175265 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:16:14.176211 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:16:14.179512 systemd-logind[1454]: New session 22 of user core. Apr 28 02:16:14.184248 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 28 02:16:14.408218 containerd[1469]: time="2026-04-28T02:16:14.408009763Z" level=info msg="StopPodSandbox for \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\"" Apr 28 02:16:14.408218 containerd[1469]: time="2026-04-28T02:16:14.408088954Z" level=info msg="TearDown network for sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" successfully" Apr 28 02:16:14.408218 containerd[1469]: time="2026-04-28T02:16:14.408098088Z" level=info msg="StopPodSandbox for \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" returns successfully" Apr 28 02:16:14.408515 containerd[1469]: time="2026-04-28T02:16:14.408441701Z" level=info msg="RemovePodSandbox for \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\"" Apr 28 02:16:14.408515 containerd[1469]: time="2026-04-28T02:16:14.408470610Z" level=info msg="Forcibly stopping sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\"" Apr 28 02:16:14.408548 containerd[1469]: time="2026-04-28T02:16:14.408518034Z" level=info msg="TearDown network for sandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" successfully" Apr 28 02:16:14.412803 containerd[1469]: time="2026-04-28T02:16:14.412720676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:16:14.412803 containerd[1469]: time="2026-04-28T02:16:14.412772078Z" level=info msg="RemovePodSandbox \"7945f3f3582fe90bac4b5080abe73de0820570aee956f53f6ef09fe5b86984a4\" returns successfully" Apr 28 02:16:14.413213 containerd[1469]: time="2026-04-28T02:16:14.413184565Z" level=info msg="StopPodSandbox for \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\"" Apr 28 02:16:14.413296 containerd[1469]: time="2026-04-28T02:16:14.413250743Z" level=info msg="TearDown network for sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" successfully" Apr 28 02:16:14.413296 containerd[1469]: time="2026-04-28T02:16:14.413259569Z" level=info msg="StopPodSandbox for \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" returns successfully" Apr 28 02:16:14.414017 containerd[1469]: time="2026-04-28T02:16:14.413554043Z" level=info msg="RemovePodSandbox for \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\"" Apr 28 02:16:14.414017 containerd[1469]: time="2026-04-28T02:16:14.413574405Z" level=info msg="Forcibly stopping sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\"" Apr 28 02:16:14.414017 containerd[1469]: time="2026-04-28T02:16:14.413609170Z" level=info msg="TearDown network for sandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" successfully" Apr 28 02:16:14.416023 containerd[1469]: time="2026-04-28T02:16:14.415985785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:16:14.416023 containerd[1469]: time="2026-04-28T02:16:14.416023615Z" level=info msg="RemovePodSandbox \"5f6b3acbaae55b9072b6797ad08286ba0b79b2b2470381f92e2eaf33f587559e\" returns successfully" Apr 28 02:16:14.425002 kubelet[2515]: I0428 02:16:14.424935 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781a2502-d870-4752-9150-b228287abd72" path="/var/lib/kubelet/pods/781a2502-d870-4752-9150-b228287abd72/volumes" Apr 28 02:16:14.425407 kubelet[2515]: I0428 02:16:14.425381 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d3b86c-d45f-45be-9c28-3b6fbe58bd03" path="/var/lib/kubelet/pods/c4d3b86c-d45f-45be-9c28-3b6fbe58bd03/volumes" Apr 28 02:16:14.464534 kubelet[2515]: E0428 02:16:14.464474 2515 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 02:16:14.730029 sshd[4283]: pam_unix(sshd:session): session closed for user core Apr 28 02:16:14.740768 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:60898.service: Deactivated successfully. Apr 28 02:16:14.743014 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 02:16:14.746043 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Apr 28 02:16:14.755711 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:60904.service - OpenSSH per-connection server daemon (10.0.0.1:60904). Apr 28 02:16:14.764060 systemd-logind[1454]: Removed session 22. Apr 28 02:16:14.773316 systemd[1]: Created slice kubepods-burstable-pod43dfc31e_00ca_4718_82e0_979b23c95364.slice - libcontainer container kubepods-burstable-pod43dfc31e_00ca_4718_82e0_979b23c95364.slice. 
Apr 28 02:16:14.791241 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 60904 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:16:14.792586 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:16:14.795881 systemd-logind[1454]: New session 23 of user core. Apr 28 02:16:14.806101 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 02:16:14.820887 kubelet[2515]: I0428 02:16:14.820854 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-cilium-cgroup\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.820887 kubelet[2515]: I0428 02:16:14.820889 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43dfc31e-00ca-4718-82e0-979b23c95364-clustermesh-secrets\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821008 kubelet[2515]: I0428 02:16:14.820915 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43dfc31e-00ca-4718-82e0-979b23c95364-cilium-config-path\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821008 kubelet[2515]: I0428 02:16:14.820928 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-host-proc-sys-net\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821008 kubelet[2515]: I0428 
02:16:14.820943 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-host-proc-sys-kernel\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821008 kubelet[2515]: I0428 02:16:14.820957 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43dfc31e-00ca-4718-82e0-979b23c95364-hubble-tls\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821092 kubelet[2515]: I0428 02:16:14.821000 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-bpf-maps\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821092 kubelet[2515]: I0428 02:16:14.821057 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-hostproc\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821092 kubelet[2515]: I0428 02:16:14.821080 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzp2\" (UniqueName: \"kubernetes.io/projected/43dfc31e-00ca-4718-82e0-979b23c95364-kube-api-access-8fzp2\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821141 kubelet[2515]: I0428 02:16:14.821097 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-etc-cni-netd\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821141 kubelet[2515]: I0428 02:16:14.821112 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/43dfc31e-00ca-4718-82e0-979b23c95364-cilium-ipsec-secrets\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821141 kubelet[2515]: I0428 02:16:14.821126 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-cilium-run\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821141 kubelet[2515]: I0428 02:16:14.821137 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-cni-path\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821255 kubelet[2515]: I0428 02:16:14.821148 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-lib-modules\") pod \"cilium-hxxqg\" (UID: \"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.821255 kubelet[2515]: I0428 02:16:14.821159 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43dfc31e-00ca-4718-82e0-979b23c95364-xtables-lock\") pod \"cilium-hxxqg\" (UID: 
\"43dfc31e-00ca-4718-82e0-979b23c95364\") " pod="kube-system/cilium-hxxqg" Apr 28 02:16:14.856559 sshd[4298]: pam_unix(sshd:session): session closed for user core Apr 28 02:16:14.869181 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:60904.service: Deactivated successfully. Apr 28 02:16:14.870490 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 02:16:14.871619 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Apr 28 02:16:14.872609 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:60920.service - OpenSSH per-connection server daemon (10.0.0.1:60920). Apr 28 02:16:14.873217 systemd-logind[1454]: Removed session 23. Apr 28 02:16:14.901469 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 60920 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:16:14.902545 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:16:14.905819 systemd-logind[1454]: New session 24 of user core. Apr 28 02:16:14.919270 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 28 02:16:15.077331 kubelet[2515]: E0428 02:16:15.077095 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:15.078455 containerd[1469]: time="2026-04-28T02:16:15.077961026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxxqg,Uid:43dfc31e-00ca-4718-82e0-979b23c95364,Namespace:kube-system,Attempt:0,}" Apr 28 02:16:15.101337 containerd[1469]: time="2026-04-28T02:16:15.101108935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:16:15.101470 containerd[1469]: time="2026-04-28T02:16:15.101296501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:16:15.101470 containerd[1469]: time="2026-04-28T02:16:15.101326587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:16:15.101470 containerd[1469]: time="2026-04-28T02:16:15.101413018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:16:15.126034 systemd[1]: Started cri-containerd-f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8.scope - libcontainer container f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8. Apr 28 02:16:15.150310 containerd[1469]: time="2026-04-28T02:16:15.150274035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxxqg,Uid:43dfc31e-00ca-4718-82e0-979b23c95364,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\"" Apr 28 02:16:15.151200 kubelet[2515]: E0428 02:16:15.151177 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:15.156446 containerd[1469]: time="2026-04-28T02:16:15.156413314Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 02:16:15.168434 containerd[1469]: time="2026-04-28T02:16:15.168355281Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339\"" Apr 28 02:16:15.168951 containerd[1469]: time="2026-04-28T02:16:15.168924908Z" level=info msg="StartContainer for 
\"2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339\"" Apr 28 02:16:15.197053 systemd[1]: Started cri-containerd-2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339.scope - libcontainer container 2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339. Apr 28 02:16:15.216381 containerd[1469]: time="2026-04-28T02:16:15.216335444Z" level=info msg="StartContainer for \"2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339\" returns successfully" Apr 28 02:16:15.225675 systemd[1]: cri-containerd-2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339.scope: Deactivated successfully. Apr 28 02:16:15.252487 containerd[1469]: time="2026-04-28T02:16:15.252421755Z" level=info msg="shim disconnected" id=2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339 namespace=k8s.io Apr 28 02:16:15.252487 containerd[1469]: time="2026-04-28T02:16:15.252474646Z" level=warning msg="cleaning up after shim disconnected" id=2717c0fe6c042373c6db95bcae35b1ed5f5d4fce377c26408d99b4e13f961339 namespace=k8s.io Apr 28 02:16:15.252487 containerd[1469]: time="2026-04-28T02:16:15.252481549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:15.623188 kubelet[2515]: E0428 02:16:15.623151 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:15.628941 containerd[1469]: time="2026-04-28T02:16:15.628779046Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 02:16:15.639019 containerd[1469]: time="2026-04-28T02:16:15.638959955Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363\"" Apr 28 02:16:15.639430 containerd[1469]: time="2026-04-28T02:16:15.639383650Z" level=info msg="StartContainer for \"ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363\"" Apr 28 02:16:15.665021 systemd[1]: Started cri-containerd-ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363.scope - libcontainer container ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363. Apr 28 02:16:15.683109 containerd[1469]: time="2026-04-28T02:16:15.683074114Z" level=info msg="StartContainer for \"ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363\" returns successfully" Apr 28 02:16:15.688591 systemd[1]: cri-containerd-ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363.scope: Deactivated successfully. Apr 28 02:16:15.707577 containerd[1469]: time="2026-04-28T02:16:15.707501165Z" level=info msg="shim disconnected" id=ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363 namespace=k8s.io Apr 28 02:16:15.707577 containerd[1469]: time="2026-04-28T02:16:15.707557177Z" level=warning msg="cleaning up after shim disconnected" id=ec591aa9afef49927f9af53fd9124811ab1b72b4727346a23cc6a63466741363 namespace=k8s.io Apr 28 02:16:15.707577 containerd[1469]: time="2026-04-28T02:16:15.707563978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:16.057431 kubelet[2515]: I0428 02:16:16.057183 2515 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T02:16:16Z","lastTransitionTime":"2026-04-28T02:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 28 02:16:16.627532 kubelet[2515]: E0428 02:16:16.627470 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:16.635887 containerd[1469]: time="2026-04-28T02:16:16.633436397Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 02:16:16.650648 containerd[1469]: time="2026-04-28T02:16:16.650588932Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06\"" Apr 28 02:16:16.651200 containerd[1469]: time="2026-04-28T02:16:16.651158345Z" level=info msg="StartContainer for \"f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06\"" Apr 28 02:16:16.681995 systemd[1]: Started cri-containerd-f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06.scope - libcontainer container f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06. Apr 28 02:16:16.706139 containerd[1469]: time="2026-04-28T02:16:16.706099411Z" level=info msg="StartContainer for \"f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06\" returns successfully" Apr 28 02:16:16.707039 systemd[1]: cri-containerd-f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06.scope: Deactivated successfully. 
Apr 28 02:16:16.731948 containerd[1469]: time="2026-04-28T02:16:16.731882151Z" level=info msg="shim disconnected" id=f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06 namespace=k8s.io Apr 28 02:16:16.731948 containerd[1469]: time="2026-04-28T02:16:16.731938903Z" level=warning msg="cleaning up after shim disconnected" id=f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06 namespace=k8s.io Apr 28 02:16:16.731948 containerd[1469]: time="2026-04-28T02:16:16.731946248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:16.928322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6dd960159ee3bbfa25193eac737ef5ca8ad25d2c2763a0c7a4b312c5d07ae06-rootfs.mount: Deactivated successfully. Apr 28 02:16:17.631703 kubelet[2515]: E0428 02:16:17.631638 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:17.637220 containerd[1469]: time="2026-04-28T02:16:17.636502160Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 28 02:16:17.647558 containerd[1469]: time="2026-04-28T02:16:17.647506265Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1\"" Apr 28 02:16:17.648019 containerd[1469]: time="2026-04-28T02:16:17.647987458Z" level=info msg="StartContainer for \"5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1\"" Apr 28 02:16:17.679150 systemd[1]: Started cri-containerd-5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1.scope - libcontainer container 5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1. 
Apr 28 02:16:17.696882 systemd[1]: cri-containerd-5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1.scope: Deactivated successfully. Apr 28 02:16:17.698713 containerd[1469]: time="2026-04-28T02:16:17.698665037Z" level=info msg="StartContainer for \"5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1\" returns successfully" Apr 28 02:16:17.717515 containerd[1469]: time="2026-04-28T02:16:17.717420604Z" level=info msg="shim disconnected" id=5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1 namespace=k8s.io Apr 28 02:16:17.717515 containerd[1469]: time="2026-04-28T02:16:17.717475368Z" level=warning msg="cleaning up after shim disconnected" id=5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1 namespace=k8s.io Apr 28 02:16:17.717515 containerd[1469]: time="2026-04-28T02:16:17.717482322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:16:17.928130 systemd[1]: run-containerd-runc-k8s.io-5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1-runc.vbxFy5.mount: Deactivated successfully. Apr 28 02:16:17.928225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be249f154e598862dcb4c9fc9db1e148bf522705f0235c6ee1082954865e6d1-rootfs.mount: Deactivated successfully. 
Apr 28 02:16:18.636427 kubelet[2515]: E0428 02:16:18.636387 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:18.640962 containerd[1469]: time="2026-04-28T02:16:18.640889882Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 28 02:16:18.654162 containerd[1469]: time="2026-04-28T02:16:18.654109282Z" level=info msg="CreateContainer within sandbox \"f5e5729becc19c7a2907180eb6a1defb55165d78bd80da14488c8345984885a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f\"" Apr 28 02:16:18.655264 containerd[1469]: time="2026-04-28T02:16:18.654552567Z" level=info msg="StartContainer for \"fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f\"" Apr 28 02:16:18.680248 systemd[1]: Started cri-containerd-fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f.scope - libcontainer container fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f. 
Apr 28 02:16:18.702163 containerd[1469]: time="2026-04-28T02:16:18.702113197Z" level=info msg="StartContainer for \"fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f\" returns successfully" Apr 28 02:16:18.917882 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 28 02:16:19.641371 kubelet[2515]: E0428 02:16:19.641331 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:21.078194 kubelet[2515]: E0428 02:16:21.078080 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:21.751551 systemd-networkd[1410]: lxc_health: Link UP Apr 28 02:16:21.761330 systemd-networkd[1410]: lxc_health: Gained carrier Apr 28 02:16:23.079589 kubelet[2515]: E0428 02:16:23.079206 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:23.098306 kubelet[2515]: I0428 02:16:23.097422 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hxxqg" podStartSLOduration=9.097403589 podStartE2EDuration="9.097403589s" podCreationTimestamp="2026-04-28 02:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:16:19.653982349 +0000 UTC m=+65.309052333" watchObservedRunningTime="2026-04-28 02:16:23.097403589 +0000 UTC m=+68.752473563" Apr 28 02:16:23.555167 systemd-networkd[1410]: lxc_health: Gained IPv6LL Apr 28 02:16:23.649449 kubelet[2515]: E0428 02:16:23.649367 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 28 02:16:24.651591 kubelet[2515]: E0428 02:16:24.651498 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:16:25.382192 systemd[1]: run-containerd-runc-k8s.io-fbb478bf545363304e5d98ed10717155c783276017b714ddf297b68c58aa362f-runc.1J1jMm.mount: Deactivated successfully. Apr 28 02:16:27.500746 sshd[4306]: pam_unix(sshd:session): session closed for user core Apr 28 02:16:27.503526 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:60920.service: Deactivated successfully. Apr 28 02:16:27.504731 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 02:16:27.505262 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Apr 28 02:16:27.506049 systemd-logind[1454]: Removed session 24.