Apr 28 02:12:52.851293 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 02:12:52.851311 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:12:52.851321 kernel: BIOS-provided physical RAM map:
Apr 28 02:12:52.851327 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 02:12:52.851332 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 02:12:52.851337 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 02:12:52.851343 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 28 02:12:52.851348 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 28 02:12:52.851353 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 02:12:52.851359 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 02:12:52.851364 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 02:12:52.851369 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 02:12:52.851374 kernel: NX (Execute Disable) protection: active
Apr 28 02:12:52.851379 kernel: APIC: Static calls initialized
Apr 28 02:12:52.851386 kernel: SMBIOS 2.8 present.
Apr 28 02:12:52.851393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 28 02:12:52.851398 kernel: Hypervisor detected: KVM
Apr 28 02:12:52.851404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 02:12:52.851409 kernel: kvm-clock: using sched offset of 4859885545 cycles
Apr 28 02:12:52.851415 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 02:12:52.851421 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 02:12:52.851427 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 02:12:52.851432 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 02:12:52.851438 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 28 02:12:52.851445 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 02:12:52.851451 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 02:12:52.851456 kernel: Using GB pages for direct mapping
Apr 28 02:12:52.851462 kernel: ACPI: Early table checksum verification disabled
Apr 28 02:12:52.851467 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 28 02:12:52.851473 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851479 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851484 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851490 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 28 02:12:52.851497 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851502 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851508 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851513 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:12:52.851519 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 28 02:12:52.851525 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 28 02:12:52.851530 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 28 02:12:52.851538 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 28 02:12:52.851545 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 28 02:12:52.851551 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 28 02:12:52.851557 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 28 02:12:52.851563 kernel: No NUMA configuration found
Apr 28 02:12:52.851569 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 28 02:12:52.851574 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 28 02:12:52.851580 kernel: Zone ranges:
Apr 28 02:12:52.851588 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 02:12:52.851593 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 28 02:12:52.851599 kernel: Normal empty
Apr 28 02:12:52.851605 kernel: Movable zone start for each node
Apr 28 02:12:52.851611 kernel: Early memory node ranges
Apr 28 02:12:52.851616 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 02:12:52.851622 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 28 02:12:52.851628 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 28 02:12:52.851634 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 02:12:52.851641 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 02:12:52.851672 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 28 02:12:52.851682 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 02:12:52.851691 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 02:12:52.851736 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 02:12:52.851741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 02:12:52.851745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 02:12:52.851750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 02:12:52.851755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 02:12:52.851762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 02:12:52.851767 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 02:12:52.851772 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 02:12:52.851777 kernel: TSC deadline timer available
Apr 28 02:12:52.851782 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 28 02:12:52.851787 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 02:12:52.851792 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 02:12:52.851797 kernel: kvm-guest: setup PV sched yield
Apr 28 02:12:52.851801 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 02:12:52.851808 kernel: Booting paravirtualized kernel on KVM
Apr 28 02:12:52.851813 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 02:12:52.851829 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 02:12:52.851834 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 28 02:12:52.851839 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 28 02:12:52.851844 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 02:12:52.851849 kernel: kvm-guest: PV spinlocks enabled
Apr 28 02:12:52.851853 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 02:12:52.851859 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:12:52.851866 kernel: random: crng init done
Apr 28 02:12:52.851871 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 02:12:52.851876 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 02:12:52.851880 kernel: Fallback order for Node 0: 0
Apr 28 02:12:52.851885 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 28 02:12:52.851890 kernel: Policy zone: DMA32
Apr 28 02:12:52.851895 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 02:12:52.851900 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137900K reserved, 0K cma-reserved)
Apr 28 02:12:52.851906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 02:12:52.851911 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 02:12:52.851916 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 02:12:52.851921 kernel: Dynamic Preempt: voluntary
Apr 28 02:12:52.851926 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 02:12:52.851931 kernel: rcu: RCU event tracing is enabled.
Apr 28 02:12:52.851936 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 02:12:52.851941 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 02:12:52.851946 kernel: Rude variant of Tasks RCU enabled.
Apr 28 02:12:52.851951 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 02:12:52.851957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 02:12:52.851962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 02:12:52.851967 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 02:12:52.851972 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 02:12:52.851977 kernel: Console: colour VGA+ 80x25
Apr 28 02:12:52.851982 kernel: printk: console [ttyS0] enabled
Apr 28 02:12:52.851987 kernel: ACPI: Core revision 20230628
Apr 28 02:12:52.851992 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 02:12:52.851996 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 02:12:52.852003 kernel: x2apic enabled
Apr 28 02:12:52.852008 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 02:12:52.852012 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 02:12:52.852017 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 02:12:52.852022 kernel: kvm-guest: setup PV IPIs
Apr 28 02:12:52.852027 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 02:12:52.852032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:12:52.852043 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 02:12:52.852049 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 02:12:52.852054 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 02:12:52.852060 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 02:12:52.852066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 02:12:52.852072 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 02:12:52.852077 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 02:12:52.852083 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 02:12:52.852088 kernel: RETBleed: Vulnerable
Apr 28 02:12:52.852096 kernel: Speculative Store Bypass: Vulnerable
Apr 28 02:12:52.852101 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 02:12:52.852106 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 02:12:52.852112 kernel: active return thunk: its_return_thunk
Apr 28 02:12:52.852117 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 02:12:52.852123 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 02:12:52.852128 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 02:12:52.852191 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 02:12:52.852198 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 02:12:52.852206 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 02:12:52.852212 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 02:12:52.852217 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 02:12:52.852222 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 02:12:52.852228 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 02:12:52.852233 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 02:12:52.852239 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 02:12:52.852244 kernel: Freeing SMP alternatives memory: 32K
Apr 28 02:12:52.852250 kernel: pid_max: default: 32768 minimum: 301
Apr 28 02:12:52.852256 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 02:12:52.852262 kernel: landlock: Up and running.
Apr 28 02:12:52.852267 kernel: SELinux: Initializing.
Apr 28 02:12:52.852273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:12:52.852278 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 02:12:52.852284 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 02:12:52.852289 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:12:52.852295 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:12:52.852301 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 02:12:52.852308 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 02:12:52.852313 kernel: signal: max sigframe size: 3632
Apr 28 02:12:52.852319 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 02:12:52.852324 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 02:12:52.852330 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 02:12:52.852335 kernel: smp: Bringing up secondary CPUs ...
Apr 28 02:12:52.852340 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 02:12:52.852346 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 02:12:52.852351 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 02:12:52.852358 kernel: smpboot: Max logical packages: 1
Apr 28 02:12:52.852363 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 02:12:52.852369 kernel: devtmpfs: initialized
Apr 28 02:12:52.852374 kernel: x86/mm: Memory block size: 128MB
Apr 28 02:12:52.852380 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 02:12:52.852385 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 02:12:52.852390 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 02:12:52.852396 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 02:12:52.852401 kernel: audit: initializing netlink subsys (disabled)
Apr 28 02:12:52.852408 kernel: audit: type=2000 audit(1777342371.608:1): state=initialized audit_enabled=0 res=1
Apr 28 02:12:52.852413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 02:12:52.852419 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 02:12:52.852424 kernel: cpuidle: using governor menu
Apr 28 02:12:52.852430 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 02:12:52.852435 kernel: dca service started, version 1.12.1
Apr 28 02:12:52.852441 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 02:12:52.852446 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 02:12:52.852452 kernel: PCI: Using configuration type 1 for base access
Apr 28 02:12:52.852458 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 02:12:52.852464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 02:12:52.852469 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 02:12:52.852475 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 02:12:52.852480 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 02:12:52.852486 kernel: ACPI: Added _OSI(Module Device)
Apr 28 02:12:52.852491 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 02:12:52.852496 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 02:12:52.852502 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 02:12:52.852509 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 02:12:52.852514 kernel: ACPI: Interpreter enabled
Apr 28 02:12:52.852520 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 02:12:52.852525 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 02:12:52.852531 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 02:12:52.852536 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 02:12:52.852542 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 02:12:52.852547 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 02:12:52.852676 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 02:12:52.852754 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 02:12:52.852810 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 02:12:52.852817 kernel: PCI host bridge to bus 0000:00
Apr 28 02:12:52.852875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 02:12:52.852924 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 02:12:52.852972 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 02:12:52.853023 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 28 02:12:52.853071 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 02:12:52.853119 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 28 02:12:52.853204 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 02:12:52.853272 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 02:12:52.853333 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 28 02:12:52.853392 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 28 02:12:52.853447 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 28 02:12:52.853501 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 28 02:12:52.853556 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 02:12:52.853618 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 28 02:12:52.853708 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 02:12:52.853765 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 28 02:12:52.853822 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 28 02:12:52.853924 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 28 02:12:52.853982 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 02:12:52.854037 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 28 02:12:52.854091 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 28 02:12:52.854186 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 28 02:12:52.854245 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 28 02:12:52.854302 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 28 02:12:52.854376 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 28 02:12:52.854433 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 28 02:12:52.854493 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 02:12:52.854548 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 02:12:52.854606 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 02:12:52.854698 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 28 02:12:52.854757 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 28 02:12:52.854816 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 02:12:52.854870 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 02:12:52.854877 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 02:12:52.854883 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 02:12:52.854888 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 02:12:52.854894 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 02:12:52.854901 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 02:12:52.854906 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 02:12:52.854912 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 02:12:52.854917 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 02:12:52.854922 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 02:12:52.854928 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 02:12:52.854933 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 02:12:52.854938 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 02:12:52.854944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 02:12:52.854950 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 02:12:52.854956 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 02:12:52.854961 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 02:12:52.854967 kernel: iommu: Default domain type: Translated
Apr 28 02:12:52.854972 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 02:12:52.854977 kernel: PCI: Using ACPI for IRQ routing
Apr 28 02:12:52.854983 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 02:12:52.854988 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 02:12:52.854994 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 28 02:12:52.855049 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 02:12:52.855104 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 02:12:52.855190 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 02:12:52.855198 kernel: vgaarb: loaded
Apr 28 02:12:52.855206 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 02:12:52.855212 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 02:12:52.855217 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 02:12:52.855223 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 02:12:52.855229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 02:12:52.855236 kernel: pnp: PnP ACPI init
Apr 28 02:12:52.855299 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 02:12:52.855308 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 02:12:52.855313 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 02:12:52.855319 kernel: NET: Registered PF_INET protocol family
Apr 28 02:12:52.855325 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 02:12:52.855330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 02:12:52.855336 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 02:12:52.855343 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 02:12:52.855349 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 02:12:52.855354 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 02:12:52.855360 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:12:52.855365 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 02:12:52.855371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 02:12:52.855377 kernel: NET: Registered PF_XDP protocol family
Apr 28 02:12:52.855428 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 02:12:52.855477 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 02:12:52.855529 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 02:12:52.855579 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 28 02:12:52.855629 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 02:12:52.855717 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 28 02:12:52.855726 kernel: PCI: CLS 0 bytes, default 64
Apr 28 02:12:52.855731 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 02:12:52.855737 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 02:12:52.855743 kernel: Initialise system trusted keyrings
Apr 28 02:12:52.855751 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 02:12:52.855756 kernel: Key type asymmetric registered
Apr 28 02:12:52.855761 kernel: Asymmetric key parser 'x509' registered
Apr 28 02:12:52.855767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 28 02:12:52.855773 kernel: io scheduler mq-deadline registered
Apr 28 02:12:52.855778 kernel: io scheduler kyber registered
Apr 28 02:12:52.855783 kernel: io scheduler bfq registered
Apr 28 02:12:52.855789 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 02:12:52.855795 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 02:12:52.855802 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 02:12:52.855808 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 02:12:52.855813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 02:12:52.855818 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 02:12:52.855841 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 02:12:52.855847 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 02:12:52.855852 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 02:12:52.855870 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 02:12:52.856025 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 02:12:52.856107 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 02:12:52.856249 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T02:12:52 UTC (1777342372)
Apr 28 02:12:52.856301 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 28 02:12:52.856308 kernel: intel_pstate: CPU model not supported
Apr 28 02:12:52.856314 kernel: NET: Registered PF_INET6 protocol family
Apr 28 02:12:52.856319 kernel: Segment Routing with IPv6
Apr 28 02:12:52.856325 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 02:12:52.856330 kernel: NET: Registered PF_PACKET protocol family
Apr 28 02:12:52.856338 kernel: Key type dns_resolver registered
Apr 28 02:12:52.856344 kernel: IPI shorthand broadcast: enabled
Apr 28 02:12:52.856349 kernel: sched_clock: Marking stable (911011330, 398613861)->(1431346527, -121721336)
Apr 28 02:12:52.856355 kernel: registered taskstats version 1
Apr 28 02:12:52.856360 kernel: Loading compiled-in X.509 certificates
Apr 28 02:12:52.856366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18'
Apr 28 02:12:52.856371 kernel: Key type .fscrypt registered
Apr 28 02:12:52.856376 kernel: Key type fscrypt-provisioning registered
Apr 28 02:12:52.856382 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 28 02:12:52.856389 kernel: ima: Allocated hash algorithm: sha1
Apr 28 02:12:52.856394 kernel: ima: No architecture policies found
Apr 28 02:12:52.856400 kernel: clk: Disabling unused clocks
Apr 28 02:12:52.856405 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 02:12:52.856411 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 02:12:52.856416 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 02:12:52.856422 kernel: Run /init as init process
Apr 28 02:12:52.856427 kernel: with arguments:
Apr 28 02:12:52.856432 kernel: /init
Apr 28 02:12:52.856439 kernel: with environment:
Apr 28 02:12:52.856444 kernel: HOME=/
Apr 28 02:12:52.856450 kernel: TERM=linux
Apr 28 02:12:52.856457 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:12:52.856464 systemd[1]: Detected virtualization kvm.
Apr 28 02:12:52.856470 systemd[1]: Detected architecture x86-64.
Apr 28 02:12:52.856476 systemd[1]: Running in initrd.
Apr 28 02:12:52.856481 systemd[1]: No hostname configured, using default hostname.
Apr 28 02:12:52.856488 systemd[1]: Hostname set to .
Apr 28 02:12:52.856494 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:12:52.856500 systemd[1]: Queued start job for default target initrd.target.
Apr 28 02:12:52.856505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:12:52.856511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:12:52.856518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 02:12:52.856524 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:12:52.856529 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 02:12:52.856537 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 02:12:52.856553 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 02:12:52.856559 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 02:12:52.856565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:12:52.856572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:12:52.856578 systemd[1]: Reached target paths.target - Path Units.
Apr 28 02:12:52.856584 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:12:52.856590 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:12:52.856595 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 02:12:52.856601 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:12:52.856607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:12:52.856613 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 02:12:52.856619 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 02:12:52.856627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:12:52.856633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:12:52.856639 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:12:52.856671 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 02:12:52.856682 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 02:12:52.856689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 02:12:52.856695 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 02:12:52.856701 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 02:12:52.856707 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 02:12:52.856717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 02:12:52.856723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:12:52.856728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 02:12:52.856747 systemd-journald[194]: Collecting audit messages is disabled. Apr 28 02:12:52.856763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 02:12:52.856769 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 02:12:52.856780 systemd-journald[194]: Journal started Apr 28 02:12:52.856822 systemd-journald[194]: Runtime Journal (/run/log/journal/7ce752fa45694d7caa01916a555f1414) is 6.0M, max 48.4M, 42.3M free. Apr 28 02:12:52.860394 systemd-modules-load[195]: Inserted module 'overlay' Apr 28 02:12:52.864717 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 02:12:52.870339 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 02:12:52.951978 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 28 02:12:52.952000 kernel: Bridge firewalling registered Apr 28 02:12:52.884255 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 28 02:12:52.965015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 02:12:52.969518 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 02:12:52.973831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:12:52.975883 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 02:12:52.979584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 02:12:52.983314 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:12:52.986299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:12:52.989980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 02:12:53.001687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 02:12:53.002218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:12:53.004086 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 02:12:53.013937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:12:53.018129 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 02:12:53.028820 systemd-resolved[223]: Positive Trust Anchors: Apr 28 02:12:53.028845 systemd-resolved[223]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 02:12:53.028870 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 02:12:53.031306 systemd-resolved[223]: Defaulting to hostname 'linux'. Apr 28 02:12:53.047928 dracut-cmdline[231]: dracut-dracut-053 Apr 28 02:12:53.047928 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 02:12:53.032374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 02:12:53.044589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:12:53.114343 kernel: SCSI subsystem initialized Apr 28 02:12:53.124231 kernel: Loading iSCSI transport class v2.0-870. Apr 28 02:12:53.136200 kernel: iscsi: registered transport (tcp) Apr 28 02:12:53.160866 kernel: iscsi: registered transport (qla4xxx) Apr 28 02:12:53.160936 kernel: QLogic iSCSI HBA Driver Apr 28 02:12:53.194927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 02:12:53.208323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 28 02:12:53.229199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 02:12:53.229246 kernel: device-mapper: uevent: version 1.0.3 Apr 28 02:12:53.230193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 02:12:53.269199 kernel: raid6: avx512x4 gen() 45364 MB/s Apr 28 02:12:53.286208 kernel: raid6: avx512x2 gen() 46185 MB/s Apr 28 02:12:53.303199 kernel: raid6: avx512x1 gen() 43423 MB/s Apr 28 02:12:53.320209 kernel: raid6: avx2x4 gen() 37453 MB/s Apr 28 02:12:53.337196 kernel: raid6: avx2x2 gen() 35654 MB/s Apr 28 02:12:53.355078 kernel: raid6: avx2x1 gen() 27959 MB/s Apr 28 02:12:53.355116 kernel: raid6: using algorithm avx512x2 gen() 46185 MB/s Apr 28 02:12:53.373203 kernel: raid6: .... xor() 29792 MB/s, rmw enabled Apr 28 02:12:53.373241 kernel: raid6: using avx512x2 recovery algorithm Apr 28 02:12:53.392203 kernel: xor: automatically using best checksumming function avx Apr 28 02:12:53.519209 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 02:12:53.528252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 02:12:53.545389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 02:12:53.554753 systemd-udevd[413]: Using default interface naming scheme 'v255'. Apr 28 02:12:53.557377 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:12:53.558242 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 02:12:53.574992 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 28 02:12:53.596322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 02:12:53.607319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 02:12:53.635850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 28 02:12:53.650371 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 02:12:53.662215 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 02:12:53.662470 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 02:12:53.666732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 02:12:53.673415 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 28 02:12:53.674271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 02:12:53.677230 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 02:12:53.684002 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 02:12:53.684021 kernel: GPT:9289727 != 19775487 Apr 28 02:12:53.684029 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 02:12:53.684041 kernel: GPT:9289727 != 19775487 Apr 28 02:12:53.684049 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 02:12:53.684057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:12:53.685111 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 02:12:53.697243 kernel: libata version 3.00 loaded. Apr 28 02:12:53.697706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 02:12:53.703110 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 02:12:53.710184 kernel: AVX2 version of gcm_enc/dec engaged. Apr 28 02:12:53.703222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:12:53.707748 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 28 02:12:53.721407 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Apr 28 02:12:53.721428 kernel: AES CTR mode by8 optimization enabled Apr 28 02:12:53.710494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 02:12:53.710626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:12:53.719242 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:12:53.734286 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 02:12:53.734412 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463) Apr 28 02:12:53.734421 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 02:12:53.737358 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 02:12:53.737499 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 02:12:53.739517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:12:53.747939 kernel: scsi host0: ahci Apr 28 02:12:53.748071 kernel: scsi host1: ahci Apr 28 02:12:53.748141 kernel: scsi host2: ahci Apr 28 02:12:53.748257 kernel: scsi host3: ahci Apr 28 02:12:53.748325 kernel: scsi host4: ahci Apr 28 02:12:53.748390 kernel: scsi host5: ahci Apr 28 02:12:53.748455 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 28 02:12:53.748463 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 28 02:12:53.743446 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 28 02:12:53.756436 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 28 02:12:53.756456 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 28 02:12:53.756465 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 28 02:12:53.756473 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 28 02:12:53.764620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 02:12:53.859082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:12:53.869928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 28 02:12:53.875195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 02:12:53.881618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 02:12:53.881721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 02:12:53.903346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 02:12:53.906302 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:12:53.916364 disk-uuid[555]: Primary Header is updated. Apr 28 02:12:53.916364 disk-uuid[555]: Secondary Entries is updated. Apr 28 02:12:53.916364 disk-uuid[555]: Secondary Header is updated. Apr 28 02:12:53.922171 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:12:53.923513 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 28 02:12:53.926178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:12:54.063920 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 02:12:54.063979 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 02:12:54.072677 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 02:12:54.072720 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 02:12:54.072729 kernel: ata3.00: applying bridge limits Apr 28 02:12:54.074197 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 02:12:54.076227 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 02:12:54.076241 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 02:12:54.077203 kernel: ata3.00: configured for UDMA/100 Apr 28 02:12:54.081212 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 02:12:54.134397 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 02:12:54.134706 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 02:12:54.153320 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 02:12:54.927754 disk-uuid[557]: The operation has completed successfully. Apr 28 02:12:54.929787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:12:54.947562 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 02:12:54.947687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 02:12:54.961344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 02:12:54.966720 sh[592]: Success Apr 28 02:12:54.980211 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 28 02:12:55.008947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 02:12:55.026479 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 02:12:55.028205 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 28 02:12:55.041895 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 02:12:55.041923 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:12:55.041932 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 02:12:55.043551 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 02:12:55.044749 kernel: BTRFS info (device dm-0): using free space tree Apr 28 02:12:55.050984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 02:12:55.052967 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 02:12:55.070391 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 02:12:55.072855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 02:12:55.083039 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:12:55.083072 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:12:55.083081 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:12:55.089196 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:12:55.097916 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 02:12:55.101312 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:12:55.108036 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 02:12:55.114447 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 28 02:12:55.158840 ignition[699]: Ignition 2.19.0 Apr 28 02:12:55.158867 ignition[699]: Stage: fetch-offline Apr 28 02:12:55.158890 ignition[699]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:12:55.158897 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:12:55.158962 ignition[699]: parsed url from cmdline: "" Apr 28 02:12:55.158964 ignition[699]: no config URL provided Apr 28 02:12:55.158969 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 02:12:55.158973 ignition[699]: no config at "/usr/lib/ignition/user.ign" Apr 28 02:12:55.158993 ignition[699]: op(1): [started] loading QEMU firmware config module Apr 28 02:12:55.158996 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 02:12:55.171118 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 02:12:55.176101 ignition[699]: op(1): [finished] loading QEMU firmware config module Apr 28 02:12:55.177403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 02:12:55.192940 systemd-networkd[781]: lo: Link UP Apr 28 02:12:55.192962 systemd-networkd[781]: lo: Gained carrier Apr 28 02:12:55.193898 systemd-networkd[781]: Enumeration completed Apr 28 02:12:55.194497 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:12:55.194499 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 02:12:55.195213 systemd-networkd[781]: eth0: Link UP Apr 28 02:12:55.195216 systemd-networkd[781]: eth0: Gained carrier Apr 28 02:12:55.195221 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:12:55.209687 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 28 02:12:55.213480 systemd[1]: Reached target network.target - Network. Apr 28 02:12:55.221220 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 02:12:55.312944 ignition[699]: parsing config with SHA512: 45cec76d04e1e1db4b12d257dc27eae9cedcfca394f3091cad5bca8b26e04323643c98d0b9df49a282bf34272038ba2df622e656992b1412398015adca12dc31 Apr 28 02:12:55.316105 unknown[699]: fetched base config from "system" Apr 28 02:12:55.316119 unknown[699]: fetched user config from "qemu" Apr 28 02:12:55.317073 systemd-resolved[223]: Detected conflict on linux IN A 10.0.0.6 Apr 28 02:12:55.318247 ignition[699]: fetch-offline: fetch-offline passed Apr 28 02:12:55.317080 systemd-resolved[223]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Apr 28 02:12:55.318315 ignition[699]: Ignition finished successfully Apr 28 02:12:55.326508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 02:12:55.330316 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 28 02:12:55.342514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 02:12:55.354922 ignition[785]: Ignition 2.19.0 Apr 28 02:12:55.354940 ignition[785]: Stage: kargs Apr 28 02:12:55.355079 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:12:55.355086 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:12:55.355805 ignition[785]: kargs: kargs passed Apr 28 02:12:55.355836 ignition[785]: Ignition finished successfully Apr 28 02:12:55.360727 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 02:12:55.370480 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 28 02:12:55.380762 ignition[794]: Ignition 2.19.0 Apr 28 02:12:55.380780 ignition[794]: Stage: disks Apr 28 02:12:55.380912 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:12:55.380919 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:12:55.383134 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 02:12:55.381561 ignition[794]: disks: disks passed Apr 28 02:12:55.384691 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 02:12:55.381591 ignition[794]: Ignition finished successfully Apr 28 02:12:55.389278 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 02:12:55.391020 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 02:12:55.393887 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 02:12:55.398702 systemd[1]: Reached target basic.target - Basic System. Apr 28 02:12:55.409307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 02:12:55.419392 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 28 02:12:55.423741 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 02:12:55.426764 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 02:12:55.527189 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none. Apr 28 02:12:55.527253 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 28 02:12:55.529135 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 02:12:55.540263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 02:12:55.543585 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 02:12:55.543821 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 28 02:12:55.558994 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Apr 28 02:12:55.559020 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:12:55.559029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:12:55.559039 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:12:55.559048 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:12:55.543849 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 02:12:55.543865 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 02:12:55.559870 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 02:12:55.565015 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 02:12:55.569128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 02:12:55.600347 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 02:12:55.605245 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Apr 28 02:12:55.609977 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 02:12:55.614874 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 02:12:55.693087 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 02:12:55.707288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 02:12:55.711543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 28 02:12:55.716197 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:12:55.733627 ignition[927]: INFO : Ignition 2.19.0 Apr 28 02:12:55.733627 ignition[927]: INFO : Stage: mount Apr 28 02:12:55.737866 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 02:12:55.737866 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:12:55.737866 ignition[927]: INFO : mount: mount passed Apr 28 02:12:55.737866 ignition[927]: INFO : Ignition finished successfully Apr 28 02:12:55.735136 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 02:12:55.747354 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 02:12:55.749201 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 02:12:56.040376 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 02:12:56.053378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 02:12:56.063236 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Apr 28 02:12:56.063277 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:12:56.066352 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:12:56.066369 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:12:56.071185 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:12:56.071893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 02:12:56.093822 ignition[957]: INFO : Ignition 2.19.0 Apr 28 02:12:56.093822 ignition[957]: INFO : Stage: files Apr 28 02:12:56.096801 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 02:12:56.096801 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 02:12:56.096801 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Apr 28 02:12:56.102334 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 02:12:56.102334 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 02:12:56.102334 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 02:12:56.109372 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 02:12:56.109372 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 02:12:56.109372 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 28 02:12:56.109372 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 28 02:12:56.109372 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 02:12:56.109372 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 02:12:56.102874 unknown[957]: wrote ssh authorized keys file for user: core Apr 28 02:12:56.163397 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 28 02:12:56.258091 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 
02:12:56.258091 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 02:12:56.258091 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 28 02:12:56.412600 systemd-networkd[781]: eth0: Gained IPv6LL Apr 28 02:12:56.491478 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 28 02:12:56.548836 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 02:12:56.548836 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 28 02:12:56.548836 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 28 02:12:56.548836 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:12:56.561540 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 28 02:12:56.614461 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 28 02:12:56.782224 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 02:12:56.782224 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Apr 28 02:12:56.788604 ignition[957]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 02:12:56.839559 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Apr 28 02:12:56.839559 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 02:12:56.839559 ignition[957]: INFO : files: files passed
Apr 28 02:12:56.839559 ignition[957]: INFO : Ignition finished successfully
Apr 28 02:12:56.809016 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 02:12:56.825464 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 02:12:56.831359 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 02:12:56.875531 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 02:12:56.835345 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 02:12:56.880391 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:12:56.880391 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:12:56.835431 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 02:12:56.889049 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:12:56.844268 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:12:56.847898 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 02:12:56.866329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 02:12:56.898861 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 02:12:56.898960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 02:12:56.905260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 02:12:56.908811 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 02:12:56.908923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 02:12:56.915667 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 02:12:56.929697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:12:56.941772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 02:12:56.953089 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:12:56.953269 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:12:56.957237 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 02:12:56.962522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 02:12:56.962726 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:12:56.967245 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 02:12:56.969065 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 02:12:56.974827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 02:12:56.974961 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 02:12:56.982399 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 02:12:56.984446 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 02:12:56.986130 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 02:12:56.993615 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 02:12:56.993774 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 02:12:56.998380 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 02:12:56.999965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 02:12:57.000054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 02:12:57.007075 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:12:57.007241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:12:57.010519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 02:12:57.010642 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:12:57.014206 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 02:12:57.014315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 02:12:57.022715 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 02:12:57.022824 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 02:12:57.024512 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 02:12:57.027830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 02:12:57.031788 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:12:57.035808 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 02:12:57.038203 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 02:12:57.042419 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 02:12:57.042519 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:12:57.043856 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 02:12:57.043931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:12:57.049613 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 02:12:57.049723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:12:57.054759 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 02:12:57.054848 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 02:12:57.080383 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 02:12:57.082239 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 02:12:57.082373 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:12:57.088385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 02:12:57.093247 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 02:12:57.094959 ignition[1012]: INFO : Ignition 2.19.0
Apr 28 02:12:57.094959 ignition[1012]: INFO : Stage: umount
Apr 28 02:12:57.094959 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:12:57.094959 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 02:12:57.094959 ignition[1012]: INFO : umount: umount passed
Apr 28 02:12:57.094959 ignition[1012]: INFO : Ignition finished successfully
Apr 28 02:12:57.094995 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:12:57.104379 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 02:12:57.104500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 02:12:57.111524 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 02:12:57.112043 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 02:12:57.112122 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 02:12:57.116083 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 02:12:57.116195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 02:12:57.120469 systemd[1]: Stopped target network.target - Network.
Apr 28 02:12:57.123749 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 02:12:57.123786 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 02:12:57.123832 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 02:12:57.123852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 02:12:57.127206 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 02:12:57.127236 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 02:12:57.129984 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 02:12:57.130016 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 02:12:57.134700 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 02:12:57.136246 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 02:12:57.144397 systemd-networkd[781]: eth0: DHCPv6 lease lost
Apr 28 02:12:57.148745 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 02:12:57.148845 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 02:12:57.160629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 02:12:57.160780 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 02:12:57.161629 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 02:12:57.161657 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:12:57.180423 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 02:12:57.182619 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 02:12:57.182698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 02:12:57.186570 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 02:12:57.186606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:12:57.190066 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 02:12:57.190102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:12:57.194218 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 02:12:57.194249 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:12:57.196070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:12:57.200022 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 02:12:57.200103 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 02:12:57.206915 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 02:12:57.206974 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 02:12:57.219091 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 02:12:57.219200 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 02:12:57.226827 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 02:12:57.226947 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:12:57.234770 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 02:12:57.234835 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:12:57.236802 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 02:12:57.236831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:12:57.240436 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 02:12:57.240474 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 02:12:57.246322 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 02:12:57.246356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 02:12:57.251214 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 02:12:57.251252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:12:57.270314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 02:12:57.272291 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 02:12:57.272349 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:12:57.276241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 02:12:57.276283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:12:57.279965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 02:12:57.280045 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 02:12:57.283997 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 02:12:57.306544 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 02:12:57.312101 systemd[1]: Switching root.
Apr 28 02:12:57.344433 systemd-journald[194]: Journal stopped
Apr 28 02:12:58.075977 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 28 02:12:58.076025 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 02:12:58.076039 kernel: SELinux: policy capability open_perms=1
Apr 28 02:12:58.076046 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 02:12:58.076056 kernel: SELinux: policy capability always_check_network=0
Apr 28 02:12:58.076064 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 02:12:58.076072 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 02:12:58.076082 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 02:12:58.076090 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 02:12:58.076100 kernel: audit: type=1403 audit(1777342377.509:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 02:12:58.076109 systemd[1]: Successfully loaded SELinux policy in 41.761ms.
Apr 28 02:12:58.076121 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.136ms.
Apr 28 02:12:58.076130 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:12:58.076138 systemd[1]: Detected virtualization kvm.
Apr 28 02:12:58.076146 systemd[1]: Detected architecture x86-64.
Apr 28 02:12:58.078302 systemd[1]: Detected first boot.
Apr 28 02:12:58.078807 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:12:58.078822 zram_generator::config[1073]: No configuration found.
Apr 28 02:12:58.078837 systemd[1]: Populated /etc with preset unit settings.
Apr 28 02:12:58.078847 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 02:12:58.078855 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 02:12:58.078864 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 02:12:58.078937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 02:12:58.078948 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 02:12:58.078958 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 02:12:58.078970 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 02:12:58.078978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 02:12:58.078986 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 02:12:58.078994 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 02:12:58.079002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:12:58.079010 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:12:58.079018 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 02:12:58.079026 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 02:12:58.079037 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 02:12:58.079045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:12:58.079053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 02:12:58.079061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:12:58.079070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 02:12:58.079078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:12:58.079086 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 02:12:58.079095 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:12:58.079104 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:12:58.079112 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 02:12:58.079120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 02:12:58.079127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 02:12:58.079135 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 02:12:58.079142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:12:58.079190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:12:58.079199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:12:58.079207 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 02:12:58.079217 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 02:12:58.079225 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 02:12:58.079232 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 02:12:58.079240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:58.079248 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 02:12:58.079255 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 02:12:58.079263 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 02:12:58.079271 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 02:12:58.079280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:12:58.079290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 02:12:58.079298 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 02:12:58.079305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:12:58.079313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 02:12:58.079321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:12:58.079329 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 02:12:58.079336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:12:58.079344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 02:12:58.079354 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 28 02:12:58.079362 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 28 02:12:58.079370 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 02:12:58.079377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 02:12:58.079384 kernel: fuse: init (API version 7.39)
Apr 28 02:12:58.079393 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 02:12:58.079401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 02:12:58.079425 systemd-journald[1162]: Collecting audit messages is disabled.
Apr 28 02:12:58.079444 systemd-journald[1162]: Journal started
Apr 28 02:12:58.079461 systemd-journald[1162]: Runtime Journal (/run/log/journal/7ce752fa45694d7caa01916a555f1414) is 6.0M, max 48.4M, 42.3M free.
Apr 28 02:12:58.082237 kernel: loop: module loaded
Apr 28 02:12:58.088173 kernel: ACPI: bus type drm_connector registered
Apr 28 02:12:58.102366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 02:12:58.106201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:58.108165 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 02:12:58.110332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 02:12:58.111861 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 02:12:58.113433 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 02:12:58.114847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 02:12:58.116381 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 02:12:58.117941 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 02:12:58.119469 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 02:12:58.121307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:12:58.123191 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 02:12:58.123331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 02:12:58.125084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:12:58.125232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:12:58.126934 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 02:12:58.127052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 02:12:58.128645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:12:58.128798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:12:58.130580 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 02:12:58.130723 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 02:12:58.132356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:12:58.132490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:12:58.134226 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:12:58.135961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 02:12:58.138642 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 02:12:58.145788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:12:58.149520 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 02:12:58.158306 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 02:12:58.161218 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 02:12:58.162767 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 02:12:58.165511 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 02:12:58.167793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 02:12:58.169355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 02:12:58.172808 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 02:12:58.174434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 02:12:58.176197 systemd-journald[1162]: Time spent on flushing to /var/log/journal/7ce752fa45694d7caa01916a555f1414 is 14.041ms for 944 entries.
Apr 28 02:12:58.176197 systemd-journald[1162]: System Journal (/var/log/journal/7ce752fa45694d7caa01916a555f1414) is 8.0M, max 195.6M, 187.6M free.
Apr 28 02:12:58.198714 systemd-journald[1162]: Received client request to flush runtime journal.
Apr 28 02:12:58.177350 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:12:58.180206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 02:12:58.195015 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 02:12:58.197702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 02:12:58.199759 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 02:12:58.201746 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 02:12:58.203867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 02:12:58.205911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:12:58.208609 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 28 02:12:58.208620 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 28 02:12:58.210895 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 02:12:58.212813 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 02:12:58.219456 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 02:12:58.221133 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 28 02:12:58.235719 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 02:12:58.244280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 02:12:58.256112 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 28 02:12:58.256196 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 28 02:12:58.259756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:12:58.515976 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 02:12:58.531454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:12:58.550473 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Apr 28 02:12:58.564251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:12:58.575844 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 02:12:58.587288 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 02:12:58.594367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1244)
Apr 28 02:12:58.594266 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 28 02:12:58.629909 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 02:12:58.632348 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 02:12:58.645191 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 28 02:12:58.652183 kernel: ACPI: button: Power Button [PWRF]
Apr 28 02:12:58.664287 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 02:12:58.664448 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 02:12:58.668174 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 02:12:58.673197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 28 02:12:58.677256 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 02:12:58.683099 systemd-networkd[1249]: lo: Link UP
Apr 28 02:12:58.683117 systemd-networkd[1249]: lo: Gained carrier
Apr 28 02:12:58.683934 systemd-networkd[1249]: Enumeration completed
Apr 28 02:12:58.684373 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:12:58.684376 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 02:12:58.684394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 02:12:58.685022 systemd-networkd[1249]: eth0: Link UP
Apr 28 02:12:58.685024 systemd-networkd[1249]: eth0: Gained carrier
Apr 28 02:12:58.685033 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:12:58.686463 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 02:12:58.699196 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 02:12:58.702329 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 02:12:58.816857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:12:58.820734 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 02:12:58.846349 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 02:12:58.852929 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 02:12:58.890963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 02:12:58.893449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:12:58.907347 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 02:12:58.911263 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 02:12:58.941802 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 02:12:58.944581 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 02:12:58.946301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 02:12:58.946319 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 02:12:58.947720 systemd[1]: Reached target machines.target - Containers.
Apr 28 02:12:58.949804 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 02:12:58.963491 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 02:12:58.966481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 02:12:58.967976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:12:58.968677 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 02:12:58.971216 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 02:12:58.976304 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 02:12:58.979002 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 02:12:58.986294 kernel: loop0: detected capacity change from 0 to 228704
Apr 28 02:12:58.987641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 02:12:58.995089 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 02:12:58.996315 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 02:12:59.004180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 02:12:59.027187 kernel: loop1: detected capacity change from 0 to 142488
Apr 28 02:12:59.053178 kernel: loop2: detected capacity change from 0 to 140768
Apr 28 02:12:59.085189 kernel: loop3: detected capacity change from 0 to 228704
Apr 28 02:12:59.095196 kernel: loop4: detected capacity change from 0 to 142488
Apr 28 02:12:59.105318 kernel: loop5: detected capacity change from 0 to 140768
Apr 28 02:12:59.114576 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 02:12:59.114914 (sd-merge)[1309]: Merged extensions into '/usr'.
Apr 28 02:12:59.123657 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 02:12:59.123677 systemd[1]: Reloading...
Apr 28 02:12:59.166255 zram_generator::config[1343]: No configuration found.
Apr 28 02:12:59.181585 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 02:12:59.243880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:12:59.283796 systemd[1]: Reloading finished in 159 ms.
Apr 28 02:12:59.299380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 02:12:59.302118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 02:12:59.315337 systemd[1]: Starting ensure-sysext.service...
Apr 28 02:12:59.317627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 02:12:59.320788 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)...
Apr 28 02:12:59.320809 systemd[1]: Reloading...
Apr 28 02:12:59.333499 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 02:12:59.333753 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 02:12:59.334267 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 02:12:59.334440 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Apr 28 02:12:59.334474 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Apr 28 02:12:59.336114 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:12:59.336129 systemd-tmpfiles[1382]: Skipping /boot
Apr 28 02:12:59.342253 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:12:59.342261 systemd-tmpfiles[1382]: Skipping /boot
Apr 28 02:12:59.354188 zram_generator::config[1409]: No configuration found.
Apr 28 02:12:59.435014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:12:59.473797 systemd[1]: Reloading finished in 152 ms.
Apr 28 02:12:59.490368 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:12:59.513017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 02:12:59.515785 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 02:12:59.518329 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 02:12:59.521300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 02:12:59.527336 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 02:12:59.533887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:59.534668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:12:59.535526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:12:59.537754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:12:59.540391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:12:59.541926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:12:59.542473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:59.543208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:12:59.543318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:12:59.545393 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:12:59.545490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:12:59.549778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:12:59.550431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:12:59.559041 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 02:12:59.562669 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 02:12:59.564247 augenrules[1486]: No rules
Apr 28 02:12:59.565254 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 02:12:59.571749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:59.571941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:12:59.576386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:12:59.579460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 02:12:59.581594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:12:59.584364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:12:59.586130 systemd-resolved[1460]: Positive Trust Anchors:
Apr 28 02:12:59.586135 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 02:12:59.586251 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 02:12:59.587333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:12:59.588480 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 02:12:59.589803 systemd-resolved[1460]: Defaulting to hostname 'linux'.
Apr 28 02:12:59.589902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:12:59.591471 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 02:12:59.593453 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 02:12:59.595342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:12:59.595474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:12:59.597391 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 02:12:59.597514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 02:12:59.599388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:12:59.599498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:12:59.601473 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:12:59.601586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:12:59.603661 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 02:12:59.605549 systemd[1]: Finished ensure-sysext.service.
Apr 28 02:12:59.611002 systemd[1]: Reached target network.target - Network.
Apr 28 02:12:59.612279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:12:59.613897 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 02:12:59.613941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 02:12:59.620265 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 02:12:59.621765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 02:12:59.657586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 02:12:59.659877 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 02:12:59.660652 systemd-timesyncd[1517]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 02:12:59.660708 systemd-timesyncd[1517]: Initial clock synchronization to Tue 2026-04-28 02:13:00.042616 UTC.
Apr 28 02:12:59.661555 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 02:12:59.663404 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 02:12:59.665108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 02:12:59.666811 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 02:12:59.666840 systemd[1]: Reached target paths.target - Path Units.
Apr 28 02:12:59.668018 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 02:12:59.669454 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 02:12:59.670968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 02:12:59.672623 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 02:12:59.674629 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 02:12:59.677379 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 02:12:59.679954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 02:12:59.683950 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 02:12:59.685772 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 02:12:59.687127 systemd[1]: Reached target basic.target - Basic System.
Apr 28 02:12:59.688525 systemd[1]: System is tainted: cgroupsv1
Apr 28 02:12:59.688565 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 02:12:59.688578 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 02:12:59.689469 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 02:12:59.691682 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 02:12:59.693670 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 02:12:59.696971 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 02:12:59.698472 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 02:12:59.699260 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 02:12:59.701617 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 02:12:59.705609 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 02:12:59.707026 jq[1523]: false
Apr 28 02:12:59.708363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 02:12:59.713497 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 02:12:59.716500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 02:12:59.720502 extend-filesystems[1525]: Found loop3
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found loop4
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found loop5
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found sr0
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda1
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda2
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda3
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found usr
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda4
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda6
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda7
Apr 28 02:12:59.726296 extend-filesystems[1525]: Found vda9
Apr 28 02:12:59.726296 extend-filesystems[1525]: Checking size of /dev/vda9
Apr 28 02:12:59.752219 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1247)
Apr 28 02:12:59.752239 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 02:12:59.725889 dbus-daemon[1522]: [system] SELinux support is enabled
Apr 28 02:12:59.723343 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 02:12:59.752497 extend-filesystems[1525]: Resized partition /dev/vda9
Apr 28 02:12:59.726224 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 02:12:59.754395 extend-filesystems[1551]: resize2fs 1.47.1 (20-May-2024)
Apr 28 02:12:59.757981 update_engine[1540]: I20260428 02:12:59.737974 1540 main.cc:92] Flatcar Update Engine starting
Apr 28 02:12:59.757981 update_engine[1540]: I20260428 02:12:59.738864 1540 update_check_scheduler.cc:74] Next update check in 5m5s
Apr 28 02:12:59.731722 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 02:12:59.745346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 02:12:59.745503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 02:12:59.745673 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 02:12:59.745835 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 02:12:59.755516 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 02:12:59.755727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 02:12:59.761828 jq[1546]: true
Apr 28 02:12:59.769367 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 02:12:59.770881 jq[1557]: true
Apr 28 02:12:59.781853 tar[1553]: linux-amd64/LICENSE
Apr 28 02:12:59.781853 tar[1553]: linux-amd64/helm
Apr 28 02:12:59.782359 systemd-logind[1534]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 02:12:59.782376 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 02:12:59.782884 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 02:12:59.783742 systemd-logind[1534]: New seat seat0.
Apr 28 02:12:59.787424 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 02:12:59.787444 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 02:12:59.794323 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 02:12:59.789276 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 02:12:59.789293 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 02:12:59.791759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 02:12:59.799636 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 02:12:59.801827 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 02:12:59.809421 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 02:12:59.809421 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 02:12:59.809421 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 28 02:12:59.821755 extend-filesystems[1525]: Resized filesystem in /dev/vda9 Apr 28 02:12:59.811732 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 02:12:59.811899 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 02:12:59.831893 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Apr 28 02:12:59.833102 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 02:12:59.835483 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 28 02:12:59.848436 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 02:12:59.917965 containerd[1555]: time="2026-04-28T02:12:59.917871113Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 02:12:59.934911 containerd[1555]: time="2026-04-28T02:12:59.934864853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.936906 containerd[1555]: time="2026-04-28T02:12:59.936820802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:12:59.936906 containerd[1555]: time="2026-04-28T02:12:59.936856762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 28 02:12:59.936906 containerd[1555]: time="2026-04-28T02:12:59.936869035Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 02:12:59.937004 containerd[1555]: time="2026-04-28T02:12:59.936984740Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 02:12:59.937021 containerd[1555]: time="2026-04-28T02:12:59.937007509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937067 containerd[1555]: time="2026-04-28T02:12:59.937050602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937084 containerd[1555]: time="2026-04-28T02:12:59.937067988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937280 containerd[1555]: time="2026-04-28T02:12:59.937261840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937297 containerd[1555]: time="2026-04-28T02:12:59.937285115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937310 containerd[1555]: time="2026-04-28T02:12:59.937296507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937310 containerd[1555]: time="2026-04-28T02:12:59.937303907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937374 containerd[1555]: time="2026-04-28T02:12:59.937358538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937833 containerd[1555]: time="2026-04-28T02:12:59.937506988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937833 containerd[1555]: time="2026-04-28T02:12:59.937610452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:12:59.937833 containerd[1555]: time="2026-04-28T02:12:59.937620020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 02:12:59.937833 containerd[1555]: time="2026-04-28T02:12:59.937669327Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 02:12:59.937833 containerd[1555]: time="2026-04-28T02:12:59.937725200Z" level=info msg="metadata content store policy set" policy=shared Apr 28 02:12:59.938813 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 02:12:59.944068 containerd[1555]: time="2026-04-28T02:12:59.944029064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 02:12:59.944122 containerd[1555]: time="2026-04-28T02:12:59.944076925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 02:12:59.944122 containerd[1555]: time="2026-04-28T02:12:59.944090393Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Apr 28 02:12:59.944122 containerd[1555]: time="2026-04-28T02:12:59.944101088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 02:12:59.944122 containerd[1555]: time="2026-04-28T02:12:59.944111640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 02:12:59.944248 containerd[1555]: time="2026-04-28T02:12:59.944232646Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 02:12:59.944603 containerd[1555]: time="2026-04-28T02:12:59.944561711Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 02:12:59.944670 containerd[1555]: time="2026-04-28T02:12:59.944652964Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 02:12:59.944710 containerd[1555]: time="2026-04-28T02:12:59.944677196Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 02:12:59.944726 containerd[1555]: time="2026-04-28T02:12:59.944707904Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 02:12:59.944741 containerd[1555]: time="2026-04-28T02:12:59.944724466Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944741 containerd[1555]: time="2026-04-28T02:12:59.944735023Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944766 containerd[1555]: time="2026-04-28T02:12:59.944744265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Apr 28 02:12:59.944766 containerd[1555]: time="2026-04-28T02:12:59.944754805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944792 containerd[1555]: time="2026-04-28T02:12:59.944764941Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944792 containerd[1555]: time="2026-04-28T02:12:59.944774343Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944792 containerd[1555]: time="2026-04-28T02:12:59.944784334Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944830 containerd[1555]: time="2026-04-28T02:12:59.944792773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 02:12:59.944830 containerd[1555]: time="2026-04-28T02:12:59.944807350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944830 containerd[1555]: time="2026-04-28T02:12:59.944816425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944830 containerd[1555]: time="2026-04-28T02:12:59.944825403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944883 containerd[1555]: time="2026-04-28T02:12:59.944834837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944883 containerd[1555]: time="2026-04-28T02:12:59.944848698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Apr 28 02:12:59.944883 containerd[1555]: time="2026-04-28T02:12:59.944859094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944883 containerd[1555]: time="2026-04-28T02:12:59.944867894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944883 containerd[1555]: time="2026-04-28T02:12:59.944877429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944886515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944900847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944913589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944923422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944932199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.944946 containerd[1555]: time="2026-04-28T02:12:59.944943590Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 02:12:59.945019 containerd[1555]: time="2026-04-28T02:12:59.944958026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.945019 containerd[1555]: time="2026-04-28T02:12:59.944966629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 28 02:12:59.945019 containerd[1555]: time="2026-04-28T02:12:59.944980813Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 02:12:59.945019 containerd[1555]: time="2026-04-28T02:12:59.945013492Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945025839Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945033566Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945041896Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945048667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945060555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 02:12:59.945068 containerd[1555]: time="2026-04-28T02:12:59.945068434Z" level=info msg="NRI interface is disabled by configuration." Apr 28 02:12:59.945144 containerd[1555]: time="2026-04-28T02:12:59.945079177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 28 02:12:59.945347 containerd[1555]: time="2026-04-28T02:12:59.945304231Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 02:12:59.945461 containerd[1555]: time="2026-04-28T02:12:59.945353612Z" level=info msg="Connect containerd service" Apr 28 02:12:59.945461 containerd[1555]: time="2026-04-28T02:12:59.945380232Z" level=info msg="using legacy CRI server" Apr 28 02:12:59.945461 containerd[1555]: time="2026-04-28T02:12:59.945386089Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 02:12:59.945499 containerd[1555]: time="2026-04-28T02:12:59.945475463Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 02:12:59.945934 containerd[1555]: time="2026-04-28T02:12:59.945911648Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946088735Z" level=info msg="Start subscribing containerd event" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946134464Z" level=info msg="Start recovering state" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946190146Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946204397Z" level=info msg="Start event monitor" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946215491Z" level=info msg="Start snapshots syncer" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946222678Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946223175Z" level=info msg="Start cni network conf syncer for default" Apr 28 02:12:59.948003 containerd[1555]: time="2026-04-28T02:12:59.946262066Z" level=info msg="Start streaming server" Apr 28 02:12:59.946386 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 02:12:59.948535 containerd[1555]: time="2026-04-28T02:12:59.948520150Z" level=info msg="containerd successfully booted in 0.031385s" Apr 28 02:12:59.958671 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 02:12:59.968440 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 02:12:59.973455 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 02:12:59.973644 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 02:12:59.976684 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 02:12:59.988361 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 02:12:59.992393 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 02:12:59.994890 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 02:12:59.996755 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 02:13:00.197498 tar[1553]: linux-amd64/README.md Apr 28 02:13:00.211711 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
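The containerd error above ("cni config load failed: no network config found in /etc/cni/net.d") is expected on a node where no CNI plugin has been installed yet: the CRI plugin's config (earlier in the log) points `NetworkPluginConfDir` at `/etc/cni/net.d`, which is empty at this point. As a hedged sketch only, this is the general shape of a conflist file that would satisfy the loader once written there (the network name, bridge name, and subnet below are assumptions, not values from this log):

```python
import json

# Hypothetical minimal CNI conflist that containerd's CRI plugin could load
# from /etc/cni/net.d/10-mynet.conflist. The "bridge" and "host-local"
# plugin binaries would be expected under /opt/cni/bin (per the CRI config
# logged above). All concrete values here are illustrative assumptions.
conflist = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
            },
        },
    ],
}

rendered = json.dumps(conflist, indent=2)
print(rendered)
```

In practice a CNI provider (flannel, Calico, etc.) drops its own conflist into that directory, after which the CRI plugin's conf syncer (started below as "Start cni network conf syncer for default") picks it up without a containerd restart.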
Apr 28 02:13:00.703256 systemd-networkd[1249]: eth0: Gained IPv6LL Apr 28 02:13:00.705676 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 02:13:00.708039 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 02:13:00.720438 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 02:13:00.723482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:13:00.725924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 02:13:00.739552 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 02:13:00.739732 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 02:13:00.741810 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 02:13:00.744389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 02:13:01.365968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:01.367999 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 02:13:01.371289 systemd[1]: Startup finished in 5.855s (kernel) + 3.902s (userspace) = 9.758s. 
Apr 28 02:13:01.400595 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:13:01.814194 kubelet[1658]: E0428 02:13:01.814042 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:13:01.816338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:13:01.816497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:13:05.622457 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 02:13:05.633427 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). Apr 28 02:13:05.671959 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:05.673425 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:05.680368 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 02:13:05.691408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 02:13:05.692975 systemd-logind[1534]: New session 1 of user core. Apr 28 02:13:05.700464 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 02:13:05.702113 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 02:13:05.708062 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 02:13:05.784258 systemd[1677]: Queued start job for default target default.target. 
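The kubelet failure above (exit status 1, "open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet run `kubeadm init` or `kubeadm join`, since those commands are what write that config file. A small sketch of the existence check the kubelet is effectively performing, with the error text shaped like the journal line (the helper function is illustrative, not kubelet's actual code):

```python
import os

def load_kubelet_config(path="/var/lib/kubelet/config.yaml"):
    """Loosely mimic kubelet's startup config-file check.

    Raises FileNotFoundError with a message shaped like the run.go error
    in the journal above; otherwise returns the file contents. This is a
    sketch of the failure mode, not kubelet's real loader.
    """
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: "
            f"open {path}: no such file or directory"
        )
    with open(path) as f:
        return f.read()

try:
    load_kubelet_config()
except FileNotFoundError as exc:
    print(exc)
```

Because the unit is configured to restart, systemd retries this later in the log (restart counter at 1) and the same error repeats until the config file exists.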
Apr 28 02:13:05.784534 systemd[1677]: Created slice app.slice - User Application Slice. Apr 28 02:13:05.784546 systemd[1677]: Reached target paths.target - Paths. Apr 28 02:13:05.784555 systemd[1677]: Reached target timers.target - Timers. Apr 28 02:13:05.792294 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 02:13:05.797184 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 02:13:05.797251 systemd[1677]: Reached target sockets.target - Sockets. Apr 28 02:13:05.797265 systemd[1677]: Reached target basic.target - Basic System. Apr 28 02:13:05.797291 systemd[1677]: Reached target default.target - Main User Target. Apr 28 02:13:05.797309 systemd[1677]: Startup finished in 84ms. Apr 28 02:13:05.797698 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 02:13:05.798912 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 02:13:05.863487 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:53808.service - OpenSSH per-connection server daemon (10.0.0.1:53808). Apr 28 02:13:05.892099 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:05.893345 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:05.897100 systemd-logind[1534]: New session 2 of user core. Apr 28 02:13:05.906414 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 02:13:05.959021 sshd[1689]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:05.966456 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812). Apr 28 02:13:05.966863 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:53808.service: Deactivated successfully. Apr 28 02:13:05.968080 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 02:13:05.968585 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. 
Apr 28 02:13:05.969686 systemd-logind[1534]: Removed session 2. Apr 28 02:13:05.994264 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:05.995140 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:05.998115 systemd-logind[1534]: New session 3 of user core. Apr 28 02:13:06.007416 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 02:13:06.058744 sshd[1695]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:06.070499 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:53816.service - OpenSSH per-connection server daemon (10.0.0.1:53816). Apr 28 02:13:06.070866 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:53812.service: Deactivated successfully. Apr 28 02:13:06.072882 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Apr 28 02:13:06.073391 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 02:13:06.074226 systemd-logind[1534]: Removed session 3. Apr 28 02:13:06.098423 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 53816 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:06.099537 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:06.102759 systemd-logind[1534]: New session 4 of user core. Apr 28 02:13:06.118511 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 02:13:06.172067 sshd[1702]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:06.190419 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:53818.service - OpenSSH per-connection server daemon (10.0.0.1:53818). Apr 28 02:13:06.190852 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:53816.service: Deactivated successfully. Apr 28 02:13:06.192074 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 02:13:06.192719 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. 
Apr 28 02:13:06.193821 systemd-logind[1534]: Removed session 4. Apr 28 02:13:06.220181 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 53818 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:06.221267 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:06.225212 systemd-logind[1534]: New session 5 of user core. Apr 28 02:13:06.235532 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 02:13:06.292793 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 02:13:06.293011 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:13:06.307410 sudo[1717]: pam_unix(sudo:session): session closed for user root Apr 28 02:13:06.309446 sshd[1710]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:06.322438 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Apr 28 02:13:06.322793 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:53818.service: Deactivated successfully. Apr 28 02:13:06.325455 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 02:13:06.325599 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. Apr 28 02:13:06.326691 systemd-logind[1534]: Removed session 5. Apr 28 02:13:06.350705 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:06.351679 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:06.354899 systemd-logind[1534]: New session 6 of user core. Apr 28 02:13:06.364391 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 28 02:13:06.416881 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 02:13:06.417104 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:13:06.420426 sudo[1727]: pam_unix(sudo:session): session closed for user root Apr 28 02:13:06.424410 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 02:13:06.424603 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:13:06.445405 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 28 02:13:06.446993 auditctl[1730]: No rules Apr 28 02:13:06.447742 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 02:13:06.447970 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 28 02:13:06.449344 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 02:13:06.471636 augenrules[1749]: No rules Apr 28 02:13:06.472574 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 02:13:06.473321 sudo[1726]: pam_unix(sudo:session): session closed for user root Apr 28 02:13:06.474566 sshd[1719]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:06.487420 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:53840.service - OpenSSH per-connection server daemon (10.0.0.1:53840). Apr 28 02:13:06.488021 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:53828.service: Deactivated successfully. Apr 28 02:13:06.489949 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Apr 28 02:13:06.490446 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 02:13:06.491481 systemd-logind[1534]: Removed session 6. 
Apr 28 02:13:06.517276 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 53840 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:06.518692 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:06.522250 systemd-logind[1534]: New session 7 of user core. Apr 28 02:13:06.532426 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 02:13:06.584019 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 02:13:06.584278 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 02:13:06.807422 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 02:13:06.807586 (dockerd)[1780]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 02:13:07.053617 dockerd[1780]: time="2026-04-28T02:13:07.053527580Z" level=info msg="Starting up" Apr 28 02:13:07.251358 dockerd[1780]: time="2026-04-28T02:13:07.251093173Z" level=info msg="Loading containers: start." Apr 28 02:13:07.348203 kernel: Initializing XFRM netlink socket Apr 28 02:13:07.418525 systemd-networkd[1249]: docker0: Link UP Apr 28 02:13:07.436412 dockerd[1780]: time="2026-04-28T02:13:07.436301408Z" level=info msg="Loading containers: done." 
Apr 28 02:13:07.450529 dockerd[1780]: time="2026-04-28T02:13:07.450446636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 02:13:07.450681 dockerd[1780]: time="2026-04-28T02:13:07.450558041Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 02:13:07.450681 dockerd[1780]: time="2026-04-28T02:13:07.450634633Z" level=info msg="Daemon has completed initialization" Apr 28 02:13:07.479549 dockerd[1780]: time="2026-04-28T02:13:07.479487452Z" level=info msg="API listen on /run/docker.sock" Apr 28 02:13:07.480074 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 02:13:07.869237 containerd[1555]: time="2026-04-28T02:13:07.869195020Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 28 02:13:08.315141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390609032.mount: Deactivated successfully. 
Apr 28 02:13:08.933587 containerd[1555]: time="2026-04-28T02:13:08.933522133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:08.934093 containerd[1555]: time="2026-04-28T02:13:08.934052134Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 28 02:13:08.935002 containerd[1555]: time="2026-04-28T02:13:08.934979044Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:08.937267 containerd[1555]: time="2026-04-28T02:13:08.937232208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:08.938184 containerd[1555]: time="2026-04-28T02:13:08.938132724Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.068900133s" Apr 28 02:13:08.938237 containerd[1555]: time="2026-04-28T02:13:08.938215184Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 28 02:13:08.938903 containerd[1555]: time="2026-04-28T02:13:08.938807463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 28 02:13:09.710913 containerd[1555]: time="2026-04-28T02:13:09.710843140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:09.711310 containerd[1555]: time="2026-04-28T02:13:09.711268600Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 28 02:13:09.712175 containerd[1555]: time="2026-04-28T02:13:09.712131211Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:09.714511 containerd[1555]: time="2026-04-28T02:13:09.714471653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:09.715433 containerd[1555]: time="2026-04-28T02:13:09.715397776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 776.567298ms" Apr 28 02:13:09.715473 containerd[1555]: time="2026-04-28T02:13:09.715431146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 28 02:13:09.716677 containerd[1555]: time="2026-04-28T02:13:09.716622518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 28 02:13:10.401354 containerd[1555]: time="2026-04-28T02:13:10.401293720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:10.401975 containerd[1555]: 
time="2026-04-28T02:13:10.401910126Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 28 02:13:10.402835 containerd[1555]: time="2026-04-28T02:13:10.402800673Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:10.405419 containerd[1555]: time="2026-04-28T02:13:10.405386969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:10.406118 containerd[1555]: time="2026-04-28T02:13:10.406096247Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 689.430479ms" Apr 28 02:13:10.406118 containerd[1555]: time="2026-04-28T02:13:10.406124486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 28 02:13:10.406561 containerd[1555]: time="2026-04-28T02:13:10.406528485Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 28 02:13:11.459597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179960474.mount: Deactivated successfully. 
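The `Pulled image ... size "..." in <duration>` lines above record per-image pull latency, with durations printed in either seconds or milliseconds. A hedged helper for extracting those fields from journal text (the regex is an assumption fitted to the format shown here, not a containerd tool; the sample line is abbreviated from the kube-scheduler pull above):

```python
import re

# Abbreviated copy of a containerd "Pulled image" journal line from above.
LINE = ('Pulled image "registry.k8s.io/kube-scheduler:v1.33.11" '
        'size "21856121" in 689.430479ms')

def parse_pull(line):
    """Extract (size_bytes, elapsed_seconds) from a 'Pulled image' line.

    Returns None when the line does not match. Regex is an assumption
    based on the log format in this journal, handling both "ms" and "s"
    duration suffixes as seen in the pulls above.
    """
    m = re.search(r'size "(\d+)" in ([\d.]+)(ms|s)', line)
    if not m:
        return None
    size = int(m.group(1))
    secs = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
    return size, secs

size, secs = parse_pull(LINE)
print(f"{size} bytes in {secs:.3f}s")
```

Note the reported size is the registry (compressed) byte count, so any throughput derived from it understates disk-side unpack work.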
Apr 28 02:13:11.723309 containerd[1555]: time="2026-04-28T02:13:11.723122625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:11.723797 containerd[1555]: time="2026-04-28T02:13:11.723766805Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 28 02:13:11.724565 containerd[1555]: time="2026-04-28T02:13:11.724531493Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:11.726254 containerd[1555]: time="2026-04-28T02:13:11.726229776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:11.726793 containerd[1555]: time="2026-04-28T02:13:11.726771404Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.320211068s" Apr 28 02:13:11.726823 containerd[1555]: time="2026-04-28T02:13:11.726799271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 28 02:13:11.727269 containerd[1555]: time="2026-04-28T02:13:11.727245820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 28 02:13:12.067102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 02:13:12.080385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
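The "Scheduled restart job, restart counter is at 1" line above comes roughly 10 seconds after the first kubelet failure at 02:13:01.816, which would be consistent with a `Restart=` policy using `RestartSec=10` (an assumption; the unit file itself is not shown in this log). Checking that gap from the journal timestamps:

```python
from datetime import datetime

# Timestamps copied (to millisecond precision) from the journal above:
# the first kubelet failure and the restart systemd scheduled for it.
failed = datetime.strptime("02:13:01.816", "%H:%M:%S.%f")
scheduled = datetime.strptime("02:13:12.067", "%H:%M:%S.%f")

gap = (scheduled - failed).total_seconds()
print(f"restart scheduled {gap:.3f}s after the failure")
```

The same delay pattern repeats for later restart attempts until the unit is explicitly stopped further down in the log.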
Apr 28 02:13:12.188900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:12.192313 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:13:12.230769 kubelet[2014]: E0428 02:13:12.230700 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:13:12.234437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:13:12.234584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:13:12.248716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057946324.mount: Deactivated successfully. Apr 28 02:13:12.841116 containerd[1555]: time="2026-04-28T02:13:12.841049819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:12.841596 containerd[1555]: time="2026-04-28T02:13:12.841542701Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 02:13:12.842368 containerd[1555]: time="2026-04-28T02:13:12.842336900Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:12.844559 containerd[1555]: time="2026-04-28T02:13:12.844521961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:12.845516 containerd[1555]: 
time="2026-04-28T02:13:12.845491345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.118165522s" Apr 28 02:13:12.845553 containerd[1555]: time="2026-04-28T02:13:12.845521933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 02:13:12.846213 containerd[1555]: time="2026-04-28T02:13:12.846192268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 28 02:13:13.219115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994098752.mount: Deactivated successfully. Apr 28 02:13:13.226207 containerd[1555]: time="2026-04-28T02:13:13.226039901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:13.226491 containerd[1555]: time="2026-04-28T02:13:13.226442950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 02:13:13.227393 containerd[1555]: time="2026-04-28T02:13:13.227361903Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:13.230276 containerd[1555]: time="2026-04-28T02:13:13.230177627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:13.231054 containerd[1555]: time="2026-04-28T02:13:13.231024149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 384.805832ms" Apr 28 02:13:13.231054 containerd[1555]: time="2026-04-28T02:13:13.231054070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 02:13:13.231623 containerd[1555]: time="2026-04-28T02:13:13.231569389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 02:13:13.657126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550858346.mount: Deactivated successfully. Apr 28 02:13:14.228316 containerd[1555]: time="2026-04-28T02:13:14.228253436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:14.228988 containerd[1555]: time="2026-04-28T02:13:14.228925905Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 28 02:13:14.229806 containerd[1555]: time="2026-04-28T02:13:14.229766139Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:14.232207 containerd[1555]: time="2026-04-28T02:13:14.232143892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:13:14.233003 containerd[1555]: time="2026-04-28T02:13:14.232972111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.001373888s" Apr 28 02:13:14.233034 containerd[1555]: time="2026-04-28T02:13:14.233003247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 02:13:16.715821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:16.726393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:13:16.746260 systemd[1]: Reloading requested from client PID 2175 ('systemctl') (unit session-7.scope)... Apr 28 02:13:16.746288 systemd[1]: Reloading... Apr 28 02:13:16.791204 zram_generator::config[2217]: No configuration found. Apr 28 02:13:16.866598 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 02:13:16.911042 systemd[1]: Reloading finished in 164 ms. Apr 28 02:13:16.946804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 28 02:13:16.946984 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 28 02:13:16.947275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:16.948598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:13:17.045738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:17.053515 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 02:13:17.084875 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:13:17.084875 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 02:13:17.084875 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:13:17.085341 kubelet[2274]: I0428 02:13:17.084923 2274 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 02:13:17.402805 kubelet[2274]: I0428 02:13:17.402645 2274 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 02:13:17.402805 kubelet[2274]: I0428 02:13:17.402680 2274 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 02:13:17.402950 kubelet[2274]: I0428 02:13:17.402884 2274 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 02:13:17.428324 kubelet[2274]: E0428 02:13:17.428268 2274 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 02:13:17.432661 kubelet[2274]: I0428 02:13:17.432573 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 02:13:17.440532 kubelet[2274]: E0428 02:13:17.440489 2274 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 
02:13:17.440532 kubelet[2274]: I0428 02:13:17.440522 2274 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 02:13:17.446116 kubelet[2274]: I0428 02:13:17.446059 2274 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 02:13:17.446506 kubelet[2274]: I0428 02:13:17.446445 2274 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 02:13:17.446662 kubelet[2274]: I0428 02:13:17.446481 2274 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcil
ePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 02:13:17.446662 kubelet[2274]: I0428 02:13:17.446648 2274 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 02:13:17.446662 kubelet[2274]: I0428 02:13:17.446655 2274 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 02:13:17.446771 kubelet[2274]: I0428 02:13:17.446752 2274 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:13:17.450114 kubelet[2274]: I0428 02:13:17.450058 2274 kubelet.go:480] "Attempting to sync node with API server" Apr 28 02:13:17.450114 kubelet[2274]: I0428 02:13:17.450080 2274 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 02:13:17.450114 kubelet[2274]: I0428 02:13:17.450098 2274 kubelet.go:386] "Adding apiserver pod source" Apr 28 02:13:17.450114 kubelet[2274]: I0428 02:13:17.450115 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 02:13:17.457203 kubelet[2274]: I0428 02:13:17.454724 2274 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 02:13:17.457203 kubelet[2274]: I0428 02:13:17.455421 2274 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 02:13:17.457203 kubelet[2274]: E0428 02:13:17.455688 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 02:13:17.457203 kubelet[2274]: E0428 
02:13:17.456203 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 02:13:17.457203 kubelet[2274]: W0428 02:13:17.456289 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 02:13:17.460813 kubelet[2274]: I0428 02:13:17.460786 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 02:13:17.460867 kubelet[2274]: I0428 02:13:17.460843 2274 server.go:1289] "Started kubelet" Apr 28 02:13:17.461240 kubelet[2274]: I0428 02:13:17.461121 2274 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 02:13:17.462308 kubelet[2274]: I0428 02:13:17.462270 2274 server.go:317] "Adding debug handlers to kubelet server" Apr 28 02:13:17.463573 kubelet[2274]: I0428 02:13:17.462528 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 02:13:17.463573 kubelet[2274]: I0428 02:13:17.462828 2274 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 02:13:17.464920 kubelet[2274]: E0428 02:13:17.463070 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa637e64e48cb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 02:13:17.460810931 +0000 UTC 
m=+0.403613072,LastTimestamp:2026-04-28 02:13:17.460810931 +0000 UTC m=+0.403613072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 02:13:17.465010 kubelet[2274]: I0428 02:13:17.464969 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 02:13:17.465588 kubelet[2274]: I0428 02:13:17.465561 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 02:13:17.465703 kubelet[2274]: E0428 02:13:17.465654 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:13:17.465736 kubelet[2274]: I0428 02:13:17.465717 2274 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 02:13:17.465921 kubelet[2274]: I0428 02:13:17.465899 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 02:13:17.465983 kubelet[2274]: I0428 02:13:17.465968 2274 reconciler.go:26] "Reconciler: start to sync state" Apr 28 02:13:17.466556 kubelet[2274]: E0428 02:13:17.466307 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 02:13:17.467023 kubelet[2274]: I0428 02:13:17.466927 2274 factory.go:223] Registration of the systemd container factory successfully Apr 28 02:13:17.467023 kubelet[2274]: E0428 02:13:17.466916 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 28 02:13:17.467023 
kubelet[2274]: I0428 02:13:17.467017 2274 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 02:13:17.467657 kubelet[2274]: E0428 02:13:17.467635 2274 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 02:13:17.468102 kubelet[2274]: I0428 02:13:17.468082 2274 factory.go:223] Registration of the containerd container factory successfully Apr 28 02:13:17.481910 kubelet[2274]: I0428 02:13:17.481830 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 02:13:17.484180 kubelet[2274]: I0428 02:13:17.483662 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 02:13:17.484180 kubelet[2274]: I0428 02:13:17.484093 2274 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 02:13:17.484339 kubelet[2274]: I0428 02:13:17.484216 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 02:13:17.484339 kubelet[2274]: I0428 02:13:17.484223 2274 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 02:13:17.484339 kubelet[2274]: E0428 02:13:17.484280 2274 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:13:17.485920 kubelet[2274]: I0428 02:13:17.485893 2274 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 02:13:17.485920 kubelet[2274]: I0428 02:13:17.485912 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 02:13:17.485920 kubelet[2274]: I0428 02:13:17.485924 2274 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:13:17.489621 kubelet[2274]: E0428 02:13:17.489550 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 02:13:17.566385 kubelet[2274]: E0428 02:13:17.566342 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 02:13:17.579799 kubelet[2274]: I0428 02:13:17.579673 2274 policy_none.go:49] "None policy: Start" Apr 28 02:13:17.579799 kubelet[2274]: I0428 02:13:17.579776 2274 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 02:13:17.579799 kubelet[2274]: I0428 02:13:17.579790 2274 state_mem.go:35] "Initializing new in-memory state store" Apr 28 02:13:17.583976 kubelet[2274]: E0428 02:13:17.583926 2274 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:13:17.584078 kubelet[2274]: I0428 02:13:17.584063 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:13:17.584108 kubelet[2274]: I0428 
02:13:17.584080 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:13:17.585117 kubelet[2274]: I0428 02:13:17.584695 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:13:17.585689 kubelet[2274]: E0428 02:13:17.585668 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 02:13:17.586502 kubelet[2274]: E0428 02:13:17.586480 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 02:13:17.589226 kubelet[2274]: E0428 02:13:17.588992 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:13:17.591478 kubelet[2274]: E0428 02:13:17.591445 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:13:17.593742 kubelet[2274]: E0428 02:13:17.593699 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 02:13:17.667630 kubelet[2274]: I0428 02:13:17.667438 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:17.667630 kubelet[2274]: I0428 02:13:17.667485 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:17.667825 kubelet[2274]: E0428 02:13:17.667765 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 28 02:13:17.685597 kubelet[2274]: I0428 02:13:17.685514 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:13:17.685903 kubelet[2274]: E0428 02:13:17.685848 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 28 02:13:17.768745 kubelet[2274]: I0428 02:13:17.768569 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:17.768745 kubelet[2274]: I0428 02:13:17.768735 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:17.768745 kubelet[2274]: I0428 02:13:17.768761 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:17.768936 kubelet[2274]: I0428 02:13:17.768783 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:17.768936 kubelet[2274]: I0428 02:13:17.768804 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:17.768936 kubelet[2274]: I0428 02:13:17.768838 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:17.768936 kubelet[2274]: I0428 02:13:17.768855 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:17.887657 kubelet[2274]: I0428 02:13:17.887600 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:13:17.888050 kubelet[2274]: E0428 02:13:17.888004 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 
10.0.0.6:6443: connect: connection refused" node="localhost" Apr 28 02:13:17.890284 kubelet[2274]: E0428 02:13:17.890252 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:17.890828 containerd[1555]: time="2026-04-28T02:13:17.890795377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1a71599518476c0ce699da5b062ff00,Namespace:kube-system,Attempt:0,}" Apr 28 02:13:17.891933 kubelet[2274]: E0428 02:13:17.891912 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:17.892307 containerd[1555]: time="2026-04-28T02:13:17.892259042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 28 02:13:17.894518 kubelet[2274]: E0428 02:13:17.894467 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:17.894764 containerd[1555]: time="2026-04-28T02:13:17.894739781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 28 02:13:18.068394 kubelet[2274]: E0428 02:13:18.068271 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 28 02:13:18.265217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990274881.mount: Deactivated successfully. 
Apr 28 02:13:18.271232 containerd[1555]: time="2026-04-28T02:13:18.271177187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:13:18.271864 containerd[1555]: time="2026-04-28T02:13:18.271808874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 02:13:18.272680 containerd[1555]: time="2026-04-28T02:13:18.272641805Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:13:18.273677 containerd[1555]: time="2026-04-28T02:13:18.273636828Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:13:18.274552 containerd[1555]: time="2026-04-28T02:13:18.274508189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:13:18.275675 containerd[1555]: time="2026-04-28T02:13:18.275630937Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:13:18.276421 containerd[1555]: time="2026-04-28T02:13:18.276360426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:13:18.277993 containerd[1555]: time="2026-04-28T02:13:18.277970110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:13:18.279350 
containerd[1555]: time="2026-04-28T02:13:18.279302904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 388.403308ms" Apr 28 02:13:18.279781 containerd[1555]: time="2026-04-28T02:13:18.279749747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 387.431587ms" Apr 28 02:13:18.281168 containerd[1555]: time="2026-04-28T02:13:18.281122610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 386.336137ms" Apr 28 02:13:18.289480 kubelet[2274]: I0428 02:13:18.289449 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:13:18.289797 kubelet[2274]: E0428 02:13:18.289718 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369558117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369599618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369612123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369590116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369674690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:13:18.369732 containerd[1555]: time="2026-04-28T02:13:18.369696918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.369921 containerd[1555]: time="2026-04-28T02:13:18.369784996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.370718 containerd[1555]: time="2026-04-28T02:13:18.370505237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.372538 containerd[1555]: time="2026-04-28T02:13:18.372464317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:13:18.372538 containerd[1555]: time="2026-04-28T02:13:18.372502048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:13:18.372538 containerd[1555]: time="2026-04-28T02:13:18.372509806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.372833 containerd[1555]: time="2026-04-28T02:13:18.372744966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:18.421498 containerd[1555]: time="2026-04-28T02:13:18.421437280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7b6b65335bb78b454bae7783a61446525d594705ca1173a26b6161c1465e3dd\"" Apr 28 02:13:18.422518 kubelet[2274]: E0428 02:13:18.422298 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:18.423363 containerd[1555]: time="2026-04-28T02:13:18.423315809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1a71599518476c0ce699da5b062ff00,Namespace:kube-system,Attempt:0,} returns sandbox id \"40bcec7eb8818635300836fd0e5546d8dfe03cfd9c078adb07434ae45233fa13\"" Apr 28 02:13:18.424197 containerd[1555]: time="2026-04-28T02:13:18.424111140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"84b5712c3ef32c038c9553f496edb3d28280d6417a8c7b16dc61f84b2f938845\"" Apr 28 02:13:18.424414 kubelet[2274]: E0428 02:13:18.424392 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:18.425573 kubelet[2274]: E0428 02:13:18.425543 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:18.426648 kubelet[2274]: E0428 
02:13:18.426612 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 02:13:18.428731 containerd[1555]: time="2026-04-28T02:13:18.428691225Z" level=info msg="CreateContainer within sandbox \"e7b6b65335bb78b454bae7783a61446525d594705ca1173a26b6161c1465e3dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 02:13:18.431009 containerd[1555]: time="2026-04-28T02:13:18.430935504Z" level=info msg="CreateContainer within sandbox \"40bcec7eb8818635300836fd0e5546d8dfe03cfd9c078adb07434ae45233fa13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 02:13:18.432543 containerd[1555]: time="2026-04-28T02:13:18.432380800Z" level=info msg="CreateContainer within sandbox \"84b5712c3ef32c038c9553f496edb3d28280d6417a8c7b16dc61f84b2f938845\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 02:13:18.448203 containerd[1555]: time="2026-04-28T02:13:18.447705994Z" level=info msg="CreateContainer within sandbox \"e7b6b65335bb78b454bae7783a61446525d594705ca1173a26b6161c1465e3dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db1114e60f1914b881cc6cc5da35a68293f8b237135f2079d955ee592f7d0cf0\"" Apr 28 02:13:18.448610 containerd[1555]: time="2026-04-28T02:13:18.448560979Z" level=info msg="StartContainer for \"db1114e60f1914b881cc6cc5da35a68293f8b237135f2079d955ee592f7d0cf0\"" Apr 28 02:13:18.454908 containerd[1555]: time="2026-04-28T02:13:18.454834324Z" level=info msg="CreateContainer within sandbox \"40bcec7eb8818635300836fd0e5546d8dfe03cfd9c078adb07434ae45233fa13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"06dbaea3c291a6c6fb15bda8d220206f38d6dcad1b2341fb757b8b64dfb11afa\"" Apr 28 02:13:18.455870 containerd[1555]: time="2026-04-28T02:13:18.455820703Z" level=info msg="StartContainer for \"06dbaea3c291a6c6fb15bda8d220206f38d6dcad1b2341fb757b8b64dfb11afa\"" Apr 28 02:13:18.459754 containerd[1555]: time="2026-04-28T02:13:18.459711849Z" level=info msg="CreateContainer within sandbox \"84b5712c3ef32c038c9553f496edb3d28280d6417a8c7b16dc61f84b2f938845\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bab220af4018fca2347e1e91a3aa9169cd821752ccc66b9562e02bc37fef0e03\"" Apr 28 02:13:18.461347 containerd[1555]: time="2026-04-28T02:13:18.460328300Z" level=info msg="StartContainer for \"bab220af4018fca2347e1e91a3aa9169cd821752ccc66b9562e02bc37fef0e03\"" Apr 28 02:13:18.522889 containerd[1555]: time="2026-04-28T02:13:18.522823922Z" level=info msg="StartContainer for \"db1114e60f1914b881cc6cc5da35a68293f8b237135f2079d955ee592f7d0cf0\" returns successfully" Apr 28 02:13:18.546851 containerd[1555]: time="2026-04-28T02:13:18.542539858Z" level=info msg="StartContainer for \"bab220af4018fca2347e1e91a3aa9169cd821752ccc66b9562e02bc37fef0e03\" returns successfully" Apr 28 02:13:18.555569 containerd[1555]: time="2026-04-28T02:13:18.555508694Z" level=info msg="StartContainer for \"06dbaea3c291a6c6fb15bda8d220206f38d6dcad1b2341fb757b8b64dfb11afa\" returns successfully" Apr 28 02:13:19.091669 kubelet[2274]: I0428 02:13:19.091595 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:13:19.280396 kubelet[2274]: E0428 02:13:19.280349 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 02:13:19.381875 kubelet[2274]: I0428 02:13:19.381721 2274 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 02:13:19.381875 kubelet[2274]: E0428 02:13:19.381760 2274 kubelet_node_status.go:548] "Error updating node status, 
will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 02:13:19.451423 kubelet[2274]: I0428 02:13:19.451272 2274 apiserver.go:52] "Watching apiserver" Apr 28 02:13:19.467051 kubelet[2274]: I0428 02:13:19.466646 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 28 02:13:19.467051 kubelet[2274]: I0428 02:13:19.466677 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:19.473271 kubelet[2274]: E0428 02:13:19.473236 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:19.473271 kubelet[2274]: I0428 02:13:19.473270 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:19.476781 kubelet[2274]: E0428 02:13:19.475819 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:19.476781 kubelet[2274]: I0428 02:13:19.475861 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:19.477004 kubelet[2274]: E0428 02:13:19.476961 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:19.499836 kubelet[2274]: I0428 02:13:19.499746 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:19.501447 kubelet[2274]: E0428 02:13:19.501379 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:19.501594 kubelet[2274]: E0428 02:13:19.501559 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:19.503489 kubelet[2274]: I0428 02:13:19.503256 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:19.505055 kubelet[2274]: E0428 02:13:19.505025 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:19.505368 kubelet[2274]: E0428 02:13:19.505309 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:19.506098 kubelet[2274]: I0428 02:13:19.505741 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:19.507478 kubelet[2274]: E0428 02:13:19.507449 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:19.507682 kubelet[2274]: E0428 02:13:19.507614 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:20.507138 kubelet[2274]: I0428 02:13:20.507095 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:20.507495 kubelet[2274]: I0428 02:13:20.507196 2274 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:20.507495 kubelet[2274]: I0428 02:13:20.507369 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:20.512018 kubelet[2274]: E0428 02:13:20.511978 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:20.519177 kubelet[2274]: E0428 02:13:20.516519 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:20.519177 kubelet[2274]: E0428 02:13:20.516636 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:21.490268 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-7.scope)... Apr 28 02:13:21.490288 systemd[1]: Reloading... 
Apr 28 02:13:21.508608 kubelet[2274]: I0428 02:13:21.508525 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:21.508848 kubelet[2274]: E0428 02:13:21.508527 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:21.510251 kubelet[2274]: E0428 02:13:21.510228 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:21.514631 kubelet[2274]: E0428 02:13:21.514608 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:21.514733 kubelet[2274]: E0428 02:13:21.514720 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:21.543212 zram_generator::config[2603]: No configuration found. Apr 28 02:13:21.618927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 02:13:21.666602 systemd[1]: Reloading finished in 176 ms. Apr 28 02:13:21.690633 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:13:21.707240 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 02:13:21.707516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:13:21.719639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:13:21.829236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:13:21.840552 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 02:13:21.878753 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:13:21.878753 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 02:13:21.878753 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:13:21.879277 kubelet[2655]: I0428 02:13:21.878797 2655 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 02:13:21.885005 kubelet[2655]: I0428 02:13:21.884952 2655 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 02:13:21.885005 kubelet[2655]: I0428 02:13:21.884986 2655 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 02:13:21.885256 kubelet[2655]: I0428 02:13:21.885221 2655 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 02:13:21.886393 kubelet[2655]: I0428 02:13:21.886353 2655 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 02:13:21.888542 kubelet[2655]: I0428 02:13:21.888513 2655 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 02:13:21.891775 kubelet[2655]: E0428 02:13:21.891752 2655 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 02:13:21.891775 kubelet[2655]: I0428 02:13:21.891775 2655 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 02:13:21.895021 kubelet[2655]: I0428 02:13:21.894982 2655 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 02:13:21.895828 kubelet[2655]: I0428 02:13:21.895391 2655 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 02:13:21.895828 kubelet[2655]: I0428 02:13:21.895411 2655 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPoli
cy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 02:13:21.895828 kubelet[2655]: I0428 02:13:21.895568 2655 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 02:13:21.895828 kubelet[2655]: I0428 02:13:21.895576 2655 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 02:13:21.895828 kubelet[2655]: I0428 02:13:21.895613 2655 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:13:21.896045 kubelet[2655]: I0428 02:13:21.895762 2655 kubelet.go:480] "Attempting to sync node with API server" Apr 28 02:13:21.896045 kubelet[2655]: I0428 02:13:21.895771 2655 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 02:13:21.896045 kubelet[2655]: I0428 02:13:21.895792 2655 kubelet.go:386] "Adding apiserver pod source" Apr 28 02:13:21.896045 kubelet[2655]: I0428 02:13:21.895803 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 02:13:21.896968 kubelet[2655]: I0428 02:13:21.896953 2655 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 02:13:21.897677 kubelet[2655]: I0428 02:13:21.897655 2655 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 02:13:21.902383 kubelet[2655]: I0428 02:13:21.902343 2655 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 02:13:21.902458 kubelet[2655]: I0428 02:13:21.902453 2655 server.go:1289] "Started kubelet" Apr 28 02:13:21.903618 kubelet[2655]: I0428 02:13:21.903606 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 
02:13:21.909527 kubelet[2655]: I0428 02:13:21.909490 2655 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 02:13:21.910170 kubelet[2655]: I0428 02:13:21.910070 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 02:13:21.910393 kubelet[2655]: I0428 02:13:21.910379 2655 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 02:13:21.910505 kubelet[2655]: I0428 02:13:21.910453 2655 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 02:13:21.910530 kubelet[2655]: I0428 02:13:21.910521 2655 server.go:317] "Adding debug handlers to kubelet server" Apr 28 02:13:21.911281 kubelet[2655]: I0428 02:13:21.911249 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 02:13:21.912236 kubelet[2655]: E0428 02:13:21.912215 2655 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 02:13:21.914269 kubelet[2655]: I0428 02:13:21.914244 2655 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 02:13:21.914416 kubelet[2655]: I0428 02:13:21.914369 2655 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 02:13:21.914511 kubelet[2655]: I0428 02:13:21.914499 2655 reconciler.go:26] "Reconciler: start to sync state" Apr 28 02:13:21.915259 kubelet[2655]: I0428 02:13:21.915222 2655 factory.go:223] Registration of the systemd container factory successfully Apr 28 02:13:21.915465 kubelet[2655]: I0428 02:13:21.915410 2655 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 02:13:21.918938 kubelet[2655]: I0428 02:13:21.918896 2655 factory.go:223] Registration of the containerd container factory successfully Apr 28 02:13:21.920709 kubelet[2655]: I0428 02:13:21.919991 2655 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 02:13:21.920787 kubelet[2655]: I0428 02:13:21.920779 2655 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 02:13:21.920824 kubelet[2655]: I0428 02:13:21.920819 2655 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 02:13:21.920848 kubelet[2655]: I0428 02:13:21.920845 2655 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 02:13:21.920900 kubelet[2655]: E0428 02:13:21.920891 2655 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:13:21.952384 kubelet[2655]: I0428 02:13:21.952346 2655 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 02:13:21.952384 kubelet[2655]: I0428 02:13:21.952370 2655 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 02:13:21.952384 kubelet[2655]: I0428 02:13:21.952385 2655 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:13:21.952559 kubelet[2655]: I0428 02:13:21.952511 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 02:13:21.952559 kubelet[2655]: I0428 02:13:21.952519 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 02:13:21.952559 kubelet[2655]: I0428 02:13:21.952533 2655 policy_none.go:49] "None policy: Start" Apr 28 02:13:21.952559 kubelet[2655]: I0428 02:13:21.952550 2655 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 02:13:21.952559 kubelet[2655]: I0428 02:13:21.952559 2655 state_mem.go:35] "Initializing new in-memory state store" Apr 28 02:13:21.952648 kubelet[2655]: I0428 02:13:21.952626 2655 state_mem.go:75] "Updated machine memory state" Apr 28 02:13:21.953675 kubelet[2655]: E0428 02:13:21.953537 2655 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:13:21.953675 kubelet[2655]: I0428 02:13:21.953672 2655 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:13:21.953745 kubelet[2655]: I0428 02:13:21.953681 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:13:21.954566 kubelet[2655]: I0428 02:13:21.953927 2655 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:13:21.955103 kubelet[2655]: E0428 02:13:21.955063 2655 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 02:13:22.022734 kubelet[2655]: I0428 02:13:22.022612 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:22.022734 kubelet[2655]: I0428 02:13:22.022701 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:22.022734 kubelet[2655]: I0428 02:13:22.022719 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.029002 kubelet[2655]: E0428 02:13:22.028926 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:22.029002 kubelet[2655]: E0428 02:13:22.028926 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:22.029095 kubelet[2655]: E0428 02:13:22.029048 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.059527 kubelet[2655]: I0428 02:13:22.059472 2655 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 02:13:22.066455 kubelet[2655]: I0428 02:13:22.066418 2655 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 02:13:22.066532 kubelet[2655]: I0428 02:13:22.066479 2655 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 02:13:22.116209 kubelet[2655]: I0428 02:13:22.116016 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:22.116209 kubelet[2655]: I0428 02:13:22.116079 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.116209 kubelet[2655]: I0428 02:13:22.116135 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.116209 kubelet[2655]: I0428 02:13:22.116189 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.116209 kubelet[2655]: I0428 02:13:22.116208 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:22.116400 kubelet[2655]: I0428 02:13:22.116222 2655 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:22.116400 kubelet[2655]: I0428 02:13:22.116235 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.116400 kubelet[2655]: I0428 02:13:22.116249 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 02:13:22.116400 kubelet[2655]: I0428 02:13:22.116296 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1a71599518476c0ce699da5b062ff00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1a71599518476c0ce699da5b062ff00\") " pod="kube-system/kube-apiserver-localhost" Apr 28 02:13:22.330237 kubelet[2655]: E0428 02:13:22.330081 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.330237 kubelet[2655]: E0428 02:13:22.330100 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.330237 
kubelet[2655]: E0428 02:13:22.330091 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.502948 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 28 02:13:22.503205 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 28 02:13:22.897284 kubelet[2655]: I0428 02:13:22.897101 2655 apiserver.go:52] "Watching apiserver" Apr 28 02:13:22.914813 kubelet[2655]: I0428 02:13:22.914754 2655 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 28 02:13:22.931571 kubelet[2655]: E0428 02:13:22.931486 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.931571 kubelet[2655]: I0428 02:13:22.931554 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:22.932000 kubelet[2655]: E0428 02:13:22.931931 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.942190 kubelet[2655]: E0428 02:13:22.939896 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 28 02:13:22.942190 kubelet[2655]: E0428 02:13:22.940060 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:22.946793 kubelet[2655]: I0428 02:13:22.946721 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=2.946712374 podStartE2EDuration="2.946712374s" podCreationTimestamp="2026-04-28 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:22.945763456 +0000 UTC m=+1.101238440" watchObservedRunningTime="2026-04-28 02:13:22.946712374 +0000 UTC m=+1.102187354" Apr 28 02:13:22.952596 kubelet[2655]: I0428 02:13:22.952203 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.952191499 podStartE2EDuration="2.952191499s" podCreationTimestamp="2026-04-28 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:22.951915636 +0000 UTC m=+1.107390617" watchObservedRunningTime="2026-04-28 02:13:22.952191499 +0000 UTC m=+1.107666472" Apr 28 02:13:22.964430 kubelet[2655]: I0428 02:13:22.964305 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.96429188 podStartE2EDuration="2.96429188s" podCreationTimestamp="2026-04-28 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:22.95866367 +0000 UTC m=+1.114138650" watchObservedRunningTime="2026-04-28 02:13:22.96429188 +0000 UTC m=+1.119766864" Apr 28 02:13:23.004971 sudo[2693]: pam_unix(sudo:session): session closed for user root Apr 28 02:13:23.933729 kubelet[2655]: E0428 02:13:23.933678 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:23.934191 kubelet[2655]: E0428 02:13:23.934131 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:24.381433 sudo[1762]: pam_unix(sudo:session): session closed for user root Apr 28 02:13:24.382988 sshd[1755]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:24.387032 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:53840.service: Deactivated successfully. Apr 28 02:13:24.388762 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Apr 28 02:13:24.388818 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 02:13:24.389623 systemd-logind[1534]: Removed session 7. Apr 28 02:13:24.596465 kubelet[2655]: E0428 02:13:24.596383 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:24.935507 kubelet[2655]: E0428 02:13:24.935464 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:28.023910 kubelet[2655]: I0428 02:13:28.023854 2655 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 02:13:28.024364 containerd[1555]: time="2026-04-28T02:13:28.024219472Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 28 02:13:28.024606 kubelet[2655]: I0428 02:13:28.024425 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 28 02:13:28.556855 kubelet[2655]: I0428 02:13:28.556784 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-hostproc\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.556855 kubelet[2655]: I0428 02:13:28.556824 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cni-path\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.556855 kubelet[2655]: I0428 02:13:28.556842 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-etc-cni-netd\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.556855 kubelet[2655]: I0428 02:13:28.556856 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-net\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.556855 kubelet[2655]: I0428 02:13:28.556875 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-lib-modules\") pod \"kube-proxy-9crnb\" (UID: \"26fc77d4-4d1c-4466-bbe6-aec5428ad4ba\") " pod="kube-system/kube-proxy-9crnb"
Apr 28 02:13:28.557194 kubelet[2655]: I0428 02:13:28.556948 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-lib-modules\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557194 kubelet[2655]: I0428 02:13:28.557185 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-xtables-lock\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557241 kubelet[2655]: I0428 02:13:28.557204 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-config-path\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557241 kubelet[2655]: I0428 02:13:28.557220 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-kube-proxy\") pod \"kube-proxy-9crnb\" (UID: \"26fc77d4-4d1c-4466-bbe6-aec5428ad4ba\") " pod="kube-system/kube-proxy-9crnb"
Apr 28 02:13:28.557279 kubelet[2655]: I0428 02:13:28.557237 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8shm\" (UniqueName: \"kubernetes.io/projected/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-kube-api-access-p8shm\") pod \"kube-proxy-9crnb\" (UID: \"26fc77d4-4d1c-4466-bbe6-aec5428ad4ba\") " pod="kube-system/kube-proxy-9crnb"
Apr 28 02:13:28.557301 kubelet[2655]: I0428 02:13:28.557278 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-cgroup\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557324 kubelet[2655]: I0428 02:13:28.557310 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-kernel\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557360 kubelet[2655]: I0428 02:13:28.557345 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665pk\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557380 kubelet[2655]: I0428 02:13:28.557366 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-run\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557412 kubelet[2655]: I0428 02:13:28.557380 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-bpf-maps\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557412 kubelet[2655]: I0428 02:13:28.557406 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/836e2b6e-a9ad-44c6-a036-3460365dff88-clustermesh-secrets\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557447 kubelet[2655]: I0428 02:13:28.557418 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-hubble-tls\") pod \"cilium-86pzm\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " pod="kube-system/cilium-86pzm"
Apr 28 02:13:28.557447 kubelet[2655]: I0428 02:13:28.557431 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-xtables-lock\") pod \"kube-proxy-9crnb\" (UID: \"26fc77d4-4d1c-4466-bbe6-aec5428ad4ba\") " pod="kube-system/kube-proxy-9crnb"
Apr 28 02:13:28.669438 kubelet[2655]: E0428 02:13:28.669268 2655 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 28 02:13:28.669438 kubelet[2655]: E0428 02:13:28.669295 2655 projected.go:194] Error preparing data for projected volume kube-api-access-p8shm for pod kube-system/kube-proxy-9crnb: configmap "kube-root-ca.crt" not found
Apr 28 02:13:28.669438 kubelet[2655]: E0428 02:13:28.669348 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-kube-api-access-p8shm podName:26fc77d4-4d1c-4466-bbe6-aec5428ad4ba nodeName:}" failed. No retries permitted until 2026-04-28 02:13:29.169328202 +0000 UTC m=+7.324803180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p8shm" (UniqueName: "kubernetes.io/projected/26fc77d4-4d1c-4466-bbe6-aec5428ad4ba-kube-api-access-p8shm") pod "kube-proxy-9crnb" (UID: "26fc77d4-4d1c-4466-bbe6-aec5428ad4ba") : configmap "kube-root-ca.crt" not found
Apr 28 02:13:28.671603 kubelet[2655]: E0428 02:13:28.671561 2655 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 28 02:13:28.671603 kubelet[2655]: E0428 02:13:28.671591 2655 projected.go:194] Error preparing data for projected volume kube-api-access-665pk for pod kube-system/cilium-86pzm: configmap "kube-root-ca.crt" not found
Apr 28 02:13:28.671603 kubelet[2655]: E0428 02:13:28.671630 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk podName:836e2b6e-a9ad-44c6-a036-3460365dff88 nodeName:}" failed. No retries permitted until 2026-04-28 02:13:29.171615797 +0000 UTC m=+7.327090773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-665pk" (UniqueName: "kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk") pod "cilium-86pzm" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88") : configmap "kube-root-ca.crt" not found
Apr 28 02:13:29.362873 kubelet[2655]: I0428 02:13:29.362792 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72s6g\" (UniqueName: \"kubernetes.io/projected/5658dd35-b9cb-4b02-8176-45358b28fa2c-kube-api-access-72s6g\") pod \"cilium-operator-6c4d7847fc-vk6qw\" (UID: \"5658dd35-b9cb-4b02-8176-45358b28fa2c\") " pod="kube-system/cilium-operator-6c4d7847fc-vk6qw"
Apr 28 02:13:29.362873 kubelet[2655]: I0428 02:13:29.362855 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5658dd35-b9cb-4b02-8176-45358b28fa2c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vk6qw\" (UID: \"5658dd35-b9cb-4b02-8176-45358b28fa2c\") " pod="kube-system/cilium-operator-6c4d7847fc-vk6qw"
Apr 28 02:13:29.375065 kubelet[2655]: E0428 02:13:29.375026 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.375794 containerd[1555]: time="2026-04-28T02:13:29.375423670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86pzm,Uid:836e2b6e-a9ad-44c6-a036-3460365dff88,Namespace:kube-system,Attempt:0,}"
Apr 28 02:13:29.376629 kubelet[2655]: E0428 02:13:29.376596 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.376891 containerd[1555]: time="2026-04-28T02:13:29.376867662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9crnb,Uid:26fc77d4-4d1c-4466-bbe6-aec5428ad4ba,Namespace:kube-system,Attempt:0,}"
Apr 28 02:13:29.400463 containerd[1555]: time="2026-04-28T02:13:29.400402470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:13:29.400463 containerd[1555]: time="2026-04-28T02:13:29.400441281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:13:29.400463 containerd[1555]: time="2026-04-28T02:13:29.400449777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.400620 containerd[1555]: time="2026-04-28T02:13:29.400507235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.404842 containerd[1555]: time="2026-04-28T02:13:29.404770545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:13:29.404842 containerd[1555]: time="2026-04-28T02:13:29.404817368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:13:29.404842 containerd[1555]: time="2026-04-28T02:13:29.404827372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.404996 containerd[1555]: time="2026-04-28T02:13:29.404896311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.433828 containerd[1555]: time="2026-04-28T02:13:29.433762117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86pzm,Uid:836e2b6e-a9ad-44c6-a036-3460365dff88,Namespace:kube-system,Attempt:0,} returns sandbox id \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\""
Apr 28 02:13:29.434426 kubelet[2655]: E0428 02:13:29.434393 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.435237 containerd[1555]: time="2026-04-28T02:13:29.435128069Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 28 02:13:29.440298 containerd[1555]: time="2026-04-28T02:13:29.440180970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9crnb,Uid:26fc77d4-4d1c-4466-bbe6-aec5428ad4ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea498f1bf14b5fd0e4f13e4bbd3b33b29ce643821db28f2bfefbae62bb354704\""
Apr 28 02:13:29.441587 kubelet[2655]: E0428 02:13:29.441049 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.446255 containerd[1555]: time="2026-04-28T02:13:29.446218569Z" level=info msg="CreateContainer within sandbox \"ea498f1bf14b5fd0e4f13e4bbd3b33b29ce643821db28f2bfefbae62bb354704\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 28 02:13:29.459228 containerd[1555]: time="2026-04-28T02:13:29.459182929Z" level=info msg="CreateContainer within sandbox \"ea498f1bf14b5fd0e4f13e4bbd3b33b29ce643821db28f2bfefbae62bb354704\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64c9976233f72a5c46036824e657a06eb7ad726abb837bb6102b262b66581eda\""
Apr 28 02:13:29.459659 containerd[1555]: time="2026-04-28T02:13:29.459642803Z" level=info msg="StartContainer for \"64c9976233f72a5c46036824e657a06eb7ad726abb837bb6102b262b66581eda\""
Apr 28 02:13:29.510202 containerd[1555]: time="2026-04-28T02:13:29.509442064Z" level=info msg="StartContainer for \"64c9976233f72a5c46036824e657a06eb7ad726abb837bb6102b262b66581eda\" returns successfully"
Apr 28 02:13:29.594940 kubelet[2655]: E0428 02:13:29.594907 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.595374 containerd[1555]: time="2026-04-28T02:13:29.595333097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vk6qw,Uid:5658dd35-b9cb-4b02-8176-45358b28fa2c,Namespace:kube-system,Attempt:0,}"
Apr 28 02:13:29.626483 containerd[1555]: time="2026-04-28T02:13:29.625502867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:13:29.626483 containerd[1555]: time="2026-04-28T02:13:29.626083353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:13:29.626483 containerd[1555]: time="2026-04-28T02:13:29.626112583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.626684 containerd[1555]: time="2026-04-28T02:13:29.626512776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:29.680352 containerd[1555]: time="2026-04-28T02:13:29.680311417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vk6qw,Uid:5658dd35-b9cb-4b02-8176-45358b28fa2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\""
Apr 28 02:13:29.680925 kubelet[2655]: E0428 02:13:29.680892 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:29.947671 kubelet[2655]: E0428 02:13:29.947266 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:30.152387 kubelet[2655]: E0428 02:13:30.152298 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:30.162776 kubelet[2655]: I0428 02:13:30.162722 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9crnb" podStartSLOduration=2.162708117 podStartE2EDuration="2.162708117s" podCreationTimestamp="2026-04-28 02:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:29.959744695 +0000 UTC m=+8.115219678" watchObservedRunningTime="2026-04-28 02:13:30.162708117 +0000 UTC m=+8.318183098"
Apr 28 02:13:30.952355 kubelet[2655]: E0428 02:13:30.952318 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:31.953655 kubelet[2655]: E0428 02:13:31.953492 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:32.363478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614141845.mount: Deactivated successfully.
Apr 28 02:13:33.686532 containerd[1555]: time="2026-04-28T02:13:33.686468555Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:13:33.687110 containerd[1555]: time="2026-04-28T02:13:33.687063429Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 28 02:13:33.688075 containerd[1555]: time="2026-04-28T02:13:33.688017316Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:13:33.689796 containerd[1555]: time="2026-04-28T02:13:33.689749147Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.254548516s"
Apr 28 02:13:33.689865 containerd[1555]: time="2026-04-28T02:13:33.689794906Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 28 02:13:33.694888 containerd[1555]: time="2026-04-28T02:13:33.694824918Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 28 02:13:33.710249 containerd[1555]: time="2026-04-28T02:13:33.710201019Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 28 02:13:33.720379 containerd[1555]: time="2026-04-28T02:13:33.720325709Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\""
Apr 28 02:13:33.721032 containerd[1555]: time="2026-04-28T02:13:33.720970663Z" level=info msg="StartContainer for \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\""
Apr 28 02:13:33.765985 containerd[1555]: time="2026-04-28T02:13:33.765907482Z" level=info msg="StartContainer for \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\" returns successfully"
Apr 28 02:13:33.842800 containerd[1555]: time="2026-04-28T02:13:33.841478531Z" level=info msg="shim disconnected" id=4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12 namespace=k8s.io
Apr 28 02:13:33.842800 containerd[1555]: time="2026-04-28T02:13:33.842784241Z" level=warning msg="cleaning up after shim disconnected" id=4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12 namespace=k8s.io
Apr 28 02:13:33.842800 containerd[1555]: time="2026-04-28T02:13:33.842796379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:13:33.843882 kubelet[2655]: E0428 02:13:33.843812 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:33.962262 kubelet[2655]: E0428 02:13:33.961948 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:33.962262 kubelet[2655]: E0428 02:13:33.962015 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:33.966970 containerd[1555]: time="2026-04-28T02:13:33.966908988Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 28 02:13:33.978926 containerd[1555]: time="2026-04-28T02:13:33.978861397Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\""
Apr 28 02:13:33.979586 containerd[1555]: time="2026-04-28T02:13:33.979531281Z" level=info msg="StartContainer for \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\""
Apr 28 02:13:34.022899 containerd[1555]: time="2026-04-28T02:13:34.022860536Z" level=info msg="StartContainer for \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\" returns successfully"
Apr 28 02:13:34.031962 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 02:13:34.032563 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:13:34.032623 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:13:34.037439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:13:34.056435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:13:34.058344 containerd[1555]: time="2026-04-28T02:13:34.058291384Z" level=info msg="shim disconnected" id=e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e namespace=k8s.io
Apr 28 02:13:34.058459 containerd[1555]: time="2026-04-28T02:13:34.058348078Z" level=warning msg="cleaning up after shim disconnected" id=e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e namespace=k8s.io
Apr 28 02:13:34.058459 containerd[1555]: time="2026-04-28T02:13:34.058372756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:13:34.600684 kubelet[2655]: E0428 02:13:34.600644 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:34.717947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12-rootfs.mount: Deactivated successfully.
Apr 28 02:13:34.964825 kubelet[2655]: E0428 02:13:34.964483 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:34.980832 containerd[1555]: time="2026-04-28T02:13:34.980760401Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 28 02:13:35.003361 containerd[1555]: time="2026-04-28T02:13:35.003300928Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\""
Apr 28 02:13:35.003922 containerd[1555]: time="2026-04-28T02:13:35.003887652Z" level=info msg="StartContainer for \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\""
Apr 28 02:13:35.051052 containerd[1555]: time="2026-04-28T02:13:35.050434297Z" level=info msg="StartContainer for \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\" returns successfully"
Apr 28 02:13:35.073026 containerd[1555]: time="2026-04-28T02:13:35.072952301Z" level=info msg="shim disconnected" id=47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6 namespace=k8s.io
Apr 28 02:13:35.073026 containerd[1555]: time="2026-04-28T02:13:35.073006524Z" level=warning msg="cleaning up after shim disconnected" id=47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6 namespace=k8s.io
Apr 28 02:13:35.073026 containerd[1555]: time="2026-04-28T02:13:35.073015153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:13:35.394702 containerd[1555]: time="2026-04-28T02:13:35.394562082Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:13:35.395321 containerd[1555]: time="2026-04-28T02:13:35.395292638Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 28 02:13:35.396626 containerd[1555]: time="2026-04-28T02:13:35.396582961Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:13:35.397573 containerd[1555]: time="2026-04-28T02:13:35.397538821Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.702681492s"
Apr 28 02:13:35.397573 containerd[1555]: time="2026-04-28T02:13:35.397572359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 28 02:13:35.420270 containerd[1555]: time="2026-04-28T02:13:35.420202776Z" level=info msg="CreateContainer within sandbox \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 28 02:13:35.430391 containerd[1555]: time="2026-04-28T02:13:35.430327979Z" level=info msg="CreateContainer within sandbox \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\""
Apr 28 02:13:35.430912 containerd[1555]: time="2026-04-28T02:13:35.430859267Z" level=info msg="StartContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\""
Apr 28 02:13:35.483377 containerd[1555]: time="2026-04-28T02:13:35.483335642Z" level=info msg="StartContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" returns successfully"
Apr 28 02:13:35.718549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6-rootfs.mount: Deactivated successfully.
Apr 28 02:13:36.023498 kubelet[2655]: E0428 02:13:36.023382 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:36.025968 kubelet[2655]: E0428 02:13:36.025290 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:36.032399 containerd[1555]: time="2026-04-28T02:13:36.032357732Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 28 02:13:36.036483 kubelet[2655]: I0428 02:13:36.036065 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vk6qw" podStartSLOduration=1.3195136299999999 podStartE2EDuration="7.03604979s" podCreationTimestamp="2026-04-28 02:13:29 +0000 UTC" firstStartedPulling="2026-04-28 02:13:29.6816788 +0000 UTC m=+7.837153775" lastFinishedPulling="2026-04-28 02:13:35.398214958 +0000 UTC m=+13.553689935" observedRunningTime="2026-04-28 02:13:36.035637106 +0000 UTC m=+14.191112080" watchObservedRunningTime="2026-04-28 02:13:36.03604979 +0000 UTC m=+14.191524774"
Apr 28 02:13:36.124767 containerd[1555]: time="2026-04-28T02:13:36.124721373Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\""
Apr 28 02:13:36.125562 containerd[1555]: time="2026-04-28T02:13:36.125495923Z" level=info msg="StartContainer for \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\""
Apr 28 02:13:36.186940 containerd[1555]: time="2026-04-28T02:13:36.186876360Z" level=info msg="StartContainer for \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\" returns successfully"
Apr 28 02:13:36.205680 containerd[1555]: time="2026-04-28T02:13:36.205602154Z" level=info msg="shim disconnected" id=7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0 namespace=k8s.io
Apr 28 02:13:36.205680 containerd[1555]: time="2026-04-28T02:13:36.205663935Z" level=warning msg="cleaning up after shim disconnected" id=7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0 namespace=k8s.io
Apr 28 02:13:36.205680 containerd[1555]: time="2026-04-28T02:13:36.205671054Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:13:36.718628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0-rootfs.mount: Deactivated successfully.
Apr 28 02:13:37.031141 kubelet[2655]: E0428 02:13:37.030957 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:37.031621 kubelet[2655]: E0428 02:13:37.031503 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:37.040885 containerd[1555]: time="2026-04-28T02:13:37.040619156Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 02:13:37.061956 containerd[1555]: time="2026-04-28T02:13:37.061820036Z" level=info msg="CreateContainer within sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\""
Apr 28 02:13:37.065815 containerd[1555]: time="2026-04-28T02:13:37.063073320Z" level=info msg="StartContainer for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\""
Apr 28 02:13:37.122274 containerd[1555]: time="2026-04-28T02:13:37.122083070Z" level=info msg="StartContainer for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" returns successfully"
Apr 28 02:13:37.249596 kubelet[2655]: I0428 02:13:37.249550 2655 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 28 02:13:37.332108 kubelet[2655]: I0428 02:13:37.331954 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wfv\" (UniqueName: \"kubernetes.io/projected/a4e97339-98dd-498e-bd82-a14b9241c911-kube-api-access-l4wfv\") pod \"coredns-674b8bbfcf-l6nmf\" (UID: \"a4e97339-98dd-498e-bd82-a14b9241c911\") " pod="kube-system/coredns-674b8bbfcf-l6nmf"
Apr 28 02:13:37.332108 kubelet[2655]: I0428 02:13:37.332004 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28196de0-72bb-4d3a-8967-869b66b7ecac-config-volume\") pod \"coredns-674b8bbfcf-vbspn\" (UID: \"28196de0-72bb-4d3a-8967-869b66b7ecac\") " pod="kube-system/coredns-674b8bbfcf-vbspn"
Apr 28 02:13:37.332108 kubelet[2655]: I0428 02:13:37.332034 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e97339-98dd-498e-bd82-a14b9241c911-config-volume\") pod \"coredns-674b8bbfcf-l6nmf\" (UID: \"a4e97339-98dd-498e-bd82-a14b9241c911\") " pod="kube-system/coredns-674b8bbfcf-l6nmf"
Apr 28 02:13:37.332108 kubelet[2655]: I0428 02:13:37.332055 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvcc\" (UniqueName: \"kubernetes.io/projected/28196de0-72bb-4d3a-8967-869b66b7ecac-kube-api-access-sjvcc\") pod \"coredns-674b8bbfcf-vbspn\" (UID: \"28196de0-72bb-4d3a-8967-869b66b7ecac\") " pod="kube-system/coredns-674b8bbfcf-vbspn"
Apr 28 02:13:37.580812 kubelet[2655]: E0428 02:13:37.580683 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:37.581572 containerd[1555]: time="2026-04-28T02:13:37.581526996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbspn,Uid:28196de0-72bb-4d3a-8967-869b66b7ecac,Namespace:kube-system,Attempt:0,}"
Apr 28 02:13:37.585685 kubelet[2655]: E0428 02:13:37.584982 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:37.585830 containerd[1555]: time="2026-04-28T02:13:37.585776954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l6nmf,Uid:a4e97339-98dd-498e-bd82-a14b9241c911,Namespace:kube-system,Attempt:0,}"
Apr 28 02:13:38.035915 kubelet[2655]: E0428 02:13:38.035771 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:39.037916 kubelet[2655]: E0428 02:13:39.037885 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:39.052354 systemd-networkd[1249]: cilium_host: Link UP
Apr 28 02:13:39.052449 systemd-networkd[1249]: cilium_net: Link UP
Apr 28 02:13:39.052452 systemd-networkd[1249]: cilium_net: Gained carrier
Apr 28 02:13:39.052543 systemd-networkd[1249]: cilium_host: Gained carrier
Apr 28 02:13:39.052644 systemd-networkd[1249]: cilium_host: Gained IPv6LL
Apr 28 02:13:39.131459 systemd-networkd[1249]: cilium_vxlan: Link UP
Apr 28 02:13:39.131465 systemd-networkd[1249]: cilium_vxlan: Gained carrier
Apr 28 02:13:39.360224 kernel: NET: Registered PF_ALG protocol family
Apr 28 02:13:39.676611 systemd-networkd[1249]: cilium_net: Gained IPv6LL
Apr 28 02:13:39.876389 systemd-networkd[1249]: lxc_health: Link UP
Apr 28 02:13:39.884994 systemd-networkd[1249]: lxc_health: Gained carrier
Apr 28 02:13:40.039188 kubelet[2655]: E0428 02:13:40.039030 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:40.137449 systemd-networkd[1249]: lxca8e694076c4f: Link UP
Apr 28 02:13:40.143615 systemd-networkd[1249]: lxcea744cd14c35: Link UP
Apr 28 02:13:40.152205 kernel: eth0: renamed from tmpac378
Apr 28 02:13:40.163197 kernel: eth0: renamed from tmp0ea17
Apr 28 02:13:40.167726 systemd-networkd[1249]: lxcea744cd14c35: Gained carrier
Apr 28 02:13:40.168059 systemd-networkd[1249]: lxca8e694076c4f: Gained carrier
Apr 28 02:13:40.828451 systemd-networkd[1249]: cilium_vxlan: Gained IPv6LL
Apr 28 02:13:41.340797 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Apr 28 02:13:41.378084 kubelet[2655]: E0428 02:13:41.378023 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:13:41.398818 kubelet[2655]: I0428 02:13:41.398771 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-86pzm" podStartSLOduration=9.13901234 podStartE2EDuration="13.398760508s" podCreationTimestamp="2026-04-28 02:13:28 +0000 UTC" firstStartedPulling="2026-04-28 02:13:29.434916987 +0000 UTC m=+7.590391959" lastFinishedPulling="2026-04-28 02:13:33.694665153 +0000 UTC m=+11.850140127" observedRunningTime="2026-04-28 02:13:38.055677404 +0000 UTC m=+16.211152388" watchObservedRunningTime="2026-04-28 02:13:41.398760508 +0000 UTC m=+19.554235493"
Apr 28 02:13:41.404411 systemd-networkd[1249]: lxcea744cd14c35: Gained IPv6LL
Apr 28 02:13:41.980371 systemd-networkd[1249]: lxca8e694076c4f: Gained IPv6LL
Apr 28 02:13:43.406130 containerd[1555]: time="2026-04-28T02:13:43.406062518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:13:43.406858 containerd[1555]: time="2026-04-28T02:13:43.406585195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:13:43.406858 containerd[1555]: time="2026-04-28T02:13:43.406612005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:43.407022 containerd[1555]: time="2026-04-28T02:13:43.406788935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:43.430313 containerd[1555]: time="2026-04-28T02:13:43.429545158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:13:43.430313 containerd[1555]: time="2026-04-28T02:13:43.430120532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:13:43.430313 containerd[1555]: time="2026-04-28T02:13:43.430130205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:13:43.430313 containerd[1555]: time="2026-04-28T02:13:43.430235814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:13:43.432192 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 02:13:43.454982 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 02:13:43.469239 containerd[1555]: time="2026-04-28T02:13:43.469186892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l6nmf,Uid:a4e97339-98dd-498e-bd82-a14b9241c911,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ea1774ffa029c53471565a91511a0d149ba24b5b2da3771e86313521648fd9e\"" Apr 28 02:13:43.470764 kubelet[2655]: E0428 02:13:43.470717 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:43.477057 containerd[1555]: time="2026-04-28T02:13:43.476975434Z" level=info msg="CreateContainer within sandbox \"0ea1774ffa029c53471565a91511a0d149ba24b5b2da3771e86313521648fd9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 02:13:43.490461 containerd[1555]: time="2026-04-28T02:13:43.490403333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbspn,Uid:28196de0-72bb-4d3a-8967-869b66b7ecac,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac378e9d123aba76a2c100ae7b3fe8a4374a3460bd6c70bb7ee9ab110a2a8487\"" Apr 28 02:13:43.492824 kubelet[2655]: E0428 02:13:43.492666 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:43.497615 containerd[1555]: time="2026-04-28T02:13:43.497479137Z" level=info msg="CreateContainer within sandbox \"ac378e9d123aba76a2c100ae7b3fe8a4374a3460bd6c70bb7ee9ab110a2a8487\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 
02:13:43.513922 containerd[1555]: time="2026-04-28T02:13:43.513874873Z" level=info msg="CreateContainer within sandbox \"0ea1774ffa029c53471565a91511a0d149ba24b5b2da3771e86313521648fd9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7bc686ef8be40197196f6928e086d9a22782e30e20fad0cf4569594c6653c4f\"" Apr 28 02:13:43.514450 containerd[1555]: time="2026-04-28T02:13:43.514382976Z" level=info msg="StartContainer for \"d7bc686ef8be40197196f6928e086d9a22782e30e20fad0cf4569594c6653c4f\"" Apr 28 02:13:43.521512 containerd[1555]: time="2026-04-28T02:13:43.521449697Z" level=info msg="CreateContainer within sandbox \"ac378e9d123aba76a2c100ae7b3fe8a4374a3460bd6c70bb7ee9ab110a2a8487\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b28a4357c2158f573a7c712158e34b68d4dc9b9a7d3a62ba5ee30156f5bc9820\"" Apr 28 02:13:43.522439 containerd[1555]: time="2026-04-28T02:13:43.522394629Z" level=info msg="StartContainer for \"b28a4357c2158f573a7c712158e34b68d4dc9b9a7d3a62ba5ee30156f5bc9820\"" Apr 28 02:13:43.570486 containerd[1555]: time="2026-04-28T02:13:43.570424766Z" level=info msg="StartContainer for \"d7bc686ef8be40197196f6928e086d9a22782e30e20fad0cf4569594c6653c4f\" returns successfully" Apr 28 02:13:43.578469 containerd[1555]: time="2026-04-28T02:13:43.578423127Z" level=info msg="StartContainer for \"b28a4357c2158f573a7c712158e34b68d4dc9b9a7d3a62ba5ee30156f5bc9820\" returns successfully" Apr 28 02:13:44.047787 kubelet[2655]: E0428 02:13:44.047733 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:44.051095 kubelet[2655]: E0428 02:13:44.051030 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:44.061173 kubelet[2655]: I0428 02:13:44.061093 2655 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l6nmf" podStartSLOduration=15.061076572 podStartE2EDuration="15.061076572s" podCreationTimestamp="2026-04-28 02:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:44.060826788 +0000 UTC m=+22.216301770" watchObservedRunningTime="2026-04-28 02:13:44.061076572 +0000 UTC m=+22.216551563" Apr 28 02:13:44.086112 kubelet[2655]: I0428 02:13:44.086044 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vbspn" podStartSLOduration=15.086016637 podStartE2EDuration="15.086016637s" podCreationTimestamp="2026-04-28 02:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:13:44.073399821 +0000 UTC m=+22.228874814" watchObservedRunningTime="2026-04-28 02:13:44.086016637 +0000 UTC m=+22.241491626" Apr 28 02:13:44.411442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300800375.mount: Deactivated successfully. Apr 28 02:13:44.509643 kubelet[2655]: I0428 02:13:44.509534 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 02:13:44.510219 kubelet[2655]: E0428 02:13:44.510188 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:44.628480 update_engine[1540]: I20260428 02:13:44.628324 1540 update_attempter.cc:509] Updating boot flags... 
Apr 28 02:13:44.650246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (4051) Apr 28 02:13:45.053406 kubelet[2655]: E0428 02:13:45.053034 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:45.053406 kubelet[2655]: E0428 02:13:45.053135 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:45.053406 kubelet[2655]: E0428 02:13:45.053347 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:45.637552 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:52964.service - OpenSSH per-connection server daemon (10.0.0.1:52964). Apr 28 02:13:45.672737 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 52964 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:45.673961 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:45.677853 systemd-logind[1534]: New session 8 of user core. Apr 28 02:13:45.692651 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 02:13:45.985780 sshd[4057]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:45.988552 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:52964.service: Deactivated successfully. Apr 28 02:13:45.990125 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Apr 28 02:13:45.990191 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 02:13:45.991213 systemd-logind[1534]: Removed session 8. 
Apr 28 02:13:46.056367 kubelet[2655]: E0428 02:13:46.056307 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:46.056752 kubelet[2655]: E0428 02:13:46.056386 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 02:13:50.998376 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:50342.service - OpenSSH per-connection server daemon (10.0.0.1:50342). Apr 28 02:13:51.030223 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 50342 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:51.031544 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:51.035776 systemd-logind[1534]: New session 9 of user core. Apr 28 02:13:51.054549 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 02:13:51.170490 sshd[4073]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:51.173228 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:50342.service: Deactivated successfully. Apr 28 02:13:51.175637 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Apr 28 02:13:51.175868 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 02:13:51.176741 systemd-logind[1534]: Removed session 9. Apr 28 02:13:56.189939 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:50348.service - OpenSSH per-connection server daemon (10.0.0.1:50348). Apr 28 02:13:56.223858 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 50348 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:56.225488 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:56.230290 systemd-logind[1534]: New session 10 of user core. 
Apr 28 02:13:56.240597 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 02:13:56.391027 sshd[4089]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:56.405259 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:50362.service - OpenSSH per-connection server daemon (10.0.0.1:50362). Apr 28 02:13:56.405788 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:50348.service: Deactivated successfully. Apr 28 02:13:56.408750 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Apr 28 02:13:56.409227 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 02:13:56.410520 systemd-logind[1534]: Removed session 10. Apr 28 02:13:56.439990 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 50362 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:56.441685 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:56.447220 systemd-logind[1534]: New session 11 of user core. Apr 28 02:13:56.457728 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 02:13:56.610532 sshd[4102]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:56.620500 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:50364.service - OpenSSH per-connection server daemon (10.0.0.1:50364). Apr 28 02:13:56.620858 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:50362.service: Deactivated successfully. Apr 28 02:13:56.622209 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 02:13:56.627761 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Apr 28 02:13:56.630220 systemd-logind[1534]: Removed session 11. 
Apr 28 02:13:56.661906 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 50364 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:13:56.663342 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:13:56.666900 systemd-logind[1534]: New session 12 of user core. Apr 28 02:13:56.679865 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 02:13:56.798930 sshd[4116]: pam_unix(sshd:session): session closed for user core Apr 28 02:13:56.802722 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:50364.service: Deactivated successfully. Apr 28 02:13:56.804579 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Apr 28 02:13:56.804661 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 02:13:56.805784 systemd-logind[1534]: Removed session 12. Apr 28 02:14:01.818124 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:36784.service - OpenSSH per-connection server daemon (10.0.0.1:36784). Apr 28 02:14:01.849892 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 36784 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:01.851561 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:01.856072 systemd-logind[1534]: New session 13 of user core. Apr 28 02:14:01.865268 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 02:14:01.972127 sshd[4137]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:01.975691 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:36784.service: Deactivated successfully. Apr 28 02:14:01.977785 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Apr 28 02:14:01.977828 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 02:14:01.978864 systemd-logind[1534]: Removed session 13. 
Apr 28 02:14:06.992999 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:36794.service - OpenSSH per-connection server daemon (10.0.0.1:36794). Apr 28 02:14:07.034000 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 36794 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:07.035535 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:07.039630 systemd-logind[1534]: New session 14 of user core. Apr 28 02:14:07.052049 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 02:14:07.163353 sshd[4152]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:07.166572 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:36794.service: Deactivated successfully. Apr 28 02:14:07.168185 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Apr 28 02:14:07.168266 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 02:14:07.169095 systemd-logind[1534]: Removed session 14. Apr 28 02:14:12.178510 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Apr 28 02:14:12.212489 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:12.214390 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:12.219662 systemd-logind[1534]: New session 15 of user core. Apr 28 02:14:12.232727 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 02:14:12.342687 sshd[4168]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:12.351430 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Apr 28 02:14:12.351712 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:38810.service: Deactivated successfully. Apr 28 02:14:12.355047 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 28 02:14:12.355904 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Apr 28 02:14:12.356943 systemd-logind[1534]: Removed session 15. Apr 28 02:14:12.380933 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:12.382762 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:12.387323 systemd-logind[1534]: New session 16 of user core. Apr 28 02:14:12.397782 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 02:14:12.676190 sshd[4180]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:12.681422 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Apr 28 02:14:12.681728 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:38812.service: Deactivated successfully. Apr 28 02:14:12.683017 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 02:14:12.684604 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Apr 28 02:14:12.685633 systemd-logind[1534]: Removed session 16. Apr 28 02:14:12.720177 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:12.721426 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:12.726086 systemd-logind[1534]: New session 17 of user core. Apr 28 02:14:12.742550 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 02:14:13.171521 sshd[4193]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:13.180604 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:38824.service - OpenSSH per-connection server daemon (10.0.0.1:38824). Apr 28 02:14:13.181073 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:38822.service: Deactivated successfully. 
Apr 28 02:14:13.182517 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 02:14:13.184587 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Apr 28 02:14:13.186783 systemd-logind[1534]: Removed session 17. Apr 28 02:14:13.216591 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 38824 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:13.218085 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:13.222407 systemd-logind[1534]: New session 18 of user core. Apr 28 02:14:13.234610 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 02:14:13.471483 sshd[4213]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:13.485499 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838). Apr 28 02:14:13.485903 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:38824.service: Deactivated successfully. Apr 28 02:14:13.487258 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 02:14:13.488416 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Apr 28 02:14:13.490232 systemd-logind[1534]: Removed session 18. Apr 28 02:14:13.523547 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:13.524875 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:13.528909 systemd-logind[1534]: New session 19 of user core. Apr 28 02:14:13.535349 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 02:14:13.643396 sshd[4228]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:13.647169 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:38838.service: Deactivated successfully. Apr 28 02:14:13.648680 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 28 02:14:13.648685 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Apr 28 02:14:13.649463 systemd-logind[1534]: Removed session 19. Apr 28 02:14:15.290391 kernel: hrtimer: interrupt took 3245767 ns Apr 28 02:14:18.653413 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:38844.service - OpenSSH per-connection server daemon (10.0.0.1:38844). Apr 28 02:14:18.683357 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 38844 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:18.684629 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:18.688319 systemd-logind[1534]: New session 20 of user core. Apr 28 02:14:18.698407 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 02:14:18.800606 sshd[4249]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:18.803959 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:38844.service: Deactivated successfully. Apr 28 02:14:18.806185 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 02:14:18.806769 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. Apr 28 02:14:18.807789 systemd-logind[1534]: Removed session 20. Apr 28 02:14:23.816544 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:57776.service - OpenSSH per-connection server daemon (10.0.0.1:57776). Apr 28 02:14:23.849700 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 57776 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:23.851028 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:23.854621 systemd-logind[1534]: New session 21 of user core. Apr 28 02:14:23.861398 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 02:14:23.971226 sshd[4267]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:23.974328 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:57776.service: Deactivated successfully. 
Apr 28 02:14:23.976114 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. Apr 28 02:14:23.976116 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 02:14:23.977417 systemd-logind[1534]: Removed session 21. Apr 28 02:14:28.992193 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:57778.service - OpenSSH per-connection server daemon (10.0.0.1:57778). Apr 28 02:14:29.027856 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 57778 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:29.028302 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:29.035289 systemd-logind[1534]: New session 22 of user core. Apr 28 02:14:29.041536 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 28 02:14:29.150119 sshd[4282]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:29.159452 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:57780.service - OpenSSH per-connection server daemon (10.0.0.1:57780). Apr 28 02:14:29.159873 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:57778.service: Deactivated successfully. Apr 28 02:14:29.161252 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 02:14:29.161801 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. Apr 28 02:14:29.162766 systemd-logind[1534]: Removed session 22. Apr 28 02:14:29.188869 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:29.190207 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:29.195565 systemd-logind[1534]: New session 23 of user core. Apr 28 02:14:29.205388 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 28 02:14:30.545390 containerd[1555]: time="2026-04-28T02:14:30.545338224Z" level=info msg="StopContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" with timeout 30 (s)" Apr 28 02:14:30.546340 containerd[1555]: time="2026-04-28T02:14:30.546296779Z" level=info msg="Stop container \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" with signal terminated" Apr 28 02:14:30.585108 containerd[1555]: time="2026-04-28T02:14:30.585064921Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 02:14:30.590935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4-rootfs.mount: Deactivated successfully. Apr 28 02:14:30.596556 containerd[1555]: time="2026-04-28T02:14:30.596501041Z" level=info msg="shim disconnected" id=c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4 namespace=k8s.io Apr 28 02:14:30.596556 containerd[1555]: time="2026-04-28T02:14:30.596551283Z" level=warning msg="cleaning up after shim disconnected" id=c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4 namespace=k8s.io Apr 28 02:14:30.596702 containerd[1555]: time="2026-04-28T02:14:30.596582581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:14:30.596904 containerd[1555]: time="2026-04-28T02:14:30.596858030Z" level=info msg="StopContainer for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" with timeout 2 (s)" Apr 28 02:14:30.597422 containerd[1555]: time="2026-04-28T02:14:30.597389163Z" level=info msg="Stop container \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" with signal terminated" Apr 28 02:14:30.602845 systemd-networkd[1249]: lxc_health: Link DOWN Apr 28 02:14:30.602849 
systemd-networkd[1249]: lxc_health: Lost carrier Apr 28 02:14:30.616616 containerd[1555]: time="2026-04-28T02:14:30.616488619Z" level=info msg="StopContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" returns successfully" Apr 28 02:14:30.624697 containerd[1555]: time="2026-04-28T02:14:30.624634577Z" level=info msg="StopPodSandbox for \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\"" Apr 28 02:14:30.624697 containerd[1555]: time="2026-04-28T02:14:30.624699098Z" level=info msg="Container to stop \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.627520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879-shm.mount: Deactivated successfully. Apr 28 02:14:30.654932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c-rootfs.mount: Deactivated successfully. Apr 28 02:14:30.661059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879-rootfs.mount: Deactivated successfully. 
Apr 28 02:14:30.664747 containerd[1555]: time="2026-04-28T02:14:30.664680543Z" level=info msg="shim disconnected" id=d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879 namespace=k8s.io Apr 28 02:14:30.664747 containerd[1555]: time="2026-04-28T02:14:30.664747968Z" level=warning msg="cleaning up after shim disconnected" id=d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879 namespace=k8s.io Apr 28 02:14:30.664971 containerd[1555]: time="2026-04-28T02:14:30.664758814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:14:30.664971 containerd[1555]: time="2026-04-28T02:14:30.664693977Z" level=info msg="shim disconnected" id=46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c namespace=k8s.io Apr 28 02:14:30.664971 containerd[1555]: time="2026-04-28T02:14:30.664877422Z" level=warning msg="cleaning up after shim disconnected" id=46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c namespace=k8s.io Apr 28 02:14:30.664971 containerd[1555]: time="2026-04-28T02:14:30.664883688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:14:30.676717 containerd[1555]: time="2026-04-28T02:14:30.676674341Z" level=warning msg="cleanup warnings time=\"2026-04-28T02:14:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 02:14:30.677599 containerd[1555]: time="2026-04-28T02:14:30.677539297Z" level=info msg="TearDown network for sandbox \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\" successfully" Apr 28 02:14:30.677599 containerd[1555]: time="2026-04-28T02:14:30.677568385Z" level=info msg="StopPodSandbox for \"d6ba81f1913e88e114712834bd6d39f6a9de4515ccf0e08916086a3f1f245879\" returns successfully" Apr 28 02:14:30.679122 containerd[1555]: time="2026-04-28T02:14:30.679063737Z" level=info msg="StopContainer for 
\"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" returns successfully" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679329432Z" level=info msg="StopPodSandbox for \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\"" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679383663Z" level=info msg="Container to stop \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679394589Z" level=info msg="Container to stop \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679401488Z" level=info msg="Container to stop \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679408597Z" level=info msg="Container to stop \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.679998 containerd[1555]: time="2026-04-28T02:14:30.679414808Z" level=info msg="Container to stop \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:14:30.732877 kubelet[2655]: I0428 02:14:30.732592 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72s6g\" (UniqueName: \"kubernetes.io/projected/5658dd35-b9cb-4b02-8176-45358b28fa2c-kube-api-access-72s6g\") pod \"5658dd35-b9cb-4b02-8176-45358b28fa2c\" (UID: \"5658dd35-b9cb-4b02-8176-45358b28fa2c\") " Apr 28 02:14:30.733492 kubelet[2655]: I0428 02:14:30.732997 2655 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5658dd35-b9cb-4b02-8176-45358b28fa2c-cilium-config-path\") pod \"5658dd35-b9cb-4b02-8176-45358b28fa2c\" (UID: \"5658dd35-b9cb-4b02-8176-45358b28fa2c\") " Apr 28 02:14:30.738207 kubelet[2655]: I0428 02:14:30.736364 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5658dd35-b9cb-4b02-8176-45358b28fa2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5658dd35-b9cb-4b02-8176-45358b28fa2c" (UID: "5658dd35-b9cb-4b02-8176-45358b28fa2c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:14:30.738207 kubelet[2655]: I0428 02:14:30.737018 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5658dd35-b9cb-4b02-8176-45358b28fa2c-kube-api-access-72s6g" (OuterVolumeSpecName: "kube-api-access-72s6g") pod "5658dd35-b9cb-4b02-8176-45358b28fa2c" (UID: "5658dd35-b9cb-4b02-8176-45358b28fa2c"). InnerVolumeSpecName "kube-api-access-72s6g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:14:30.738362 containerd[1555]: time="2026-04-28T02:14:30.736958813Z" level=info msg="shim disconnected" id=74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3 namespace=k8s.io Apr 28 02:14:30.738362 containerd[1555]: time="2026-04-28T02:14:30.737028224Z" level=warning msg="cleaning up after shim disconnected" id=74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3 namespace=k8s.io Apr 28 02:14:30.738362 containerd[1555]: time="2026-04-28T02:14:30.737035645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:14:30.750443 containerd[1555]: time="2026-04-28T02:14:30.750393171Z" level=info msg="TearDown network for sandbox \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" successfully" Apr 28 02:14:30.750443 containerd[1555]: time="2026-04-28T02:14:30.750428613Z" level=info msg="StopPodSandbox for \"74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3\" returns successfully" Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833739 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-bpf-maps\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833793 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-xtables-lock\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833812 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-kernel\") pod 
\"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833833 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-665pk\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833847 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cni-path\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.833910 kubelet[2655]: I0428 02:14:30.833858 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-etc-cni-netd\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.834137 kubelet[2655]: I0428 02:14:30.833871 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-lib-modules\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.834137 kubelet[2655]: I0428 02:14:30.833886 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-hostproc\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835639 kubelet[2655]: I0428 02:14:30.835341 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-run\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835639 kubelet[2655]: I0428 02:14:30.835409 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-cgroup\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835639 kubelet[2655]: I0428 02:14:30.835427 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-hubble-tls\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835639 kubelet[2655]: I0428 02:14:30.833896 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835639 kubelet[2655]: I0428 02:14:30.835438 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835777 kubelet[2655]: I0428 02:14:30.835447 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-config-path\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835777 kubelet[2655]: I0428 02:14:30.833896 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835777 kubelet[2655]: I0428 02:14:30.833920 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cni-path" (OuterVolumeSpecName: "cni-path") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835777 kubelet[2655]: I0428 02:14:30.833916 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835777 kubelet[2655]: I0428 02:14:30.833929 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.833939 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.833951 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-hostproc" (OuterVolumeSpecName: "hostproc") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.835418 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.835508 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/836e2b6e-a9ad-44c6-a036-3460365dff88-clustermesh-secrets\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.835522 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-net\") pod \"836e2b6e-a9ad-44c6-a036-3460365dff88\" (UID: \"836e2b6e-a9ad-44c6-a036-3460365dff88\") " Apr 28 02:14:30.835878 kubelet[2655]: I0428 02:14:30.835558 2655 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835568 2655 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835575 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835582 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835590 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72s6g\" (UniqueName: 
\"kubernetes.io/projected/5658dd35-b9cb-4b02-8176-45358b28fa2c-kube-api-access-72s6g\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835600 2655 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835606 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5658dd35-b9cb-4b02-8176-45358b28fa2c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835613 2655 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.835996 kubelet[2655]: I0428 02:14:30.835620 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.836137 kubelet[2655]: I0428 02:14:30.835628 2655 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.836137 kubelet[2655]: I0428 02:14:30.835636 2655 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.836137 kubelet[2655]: I0428 02:14:30.835657 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-net" (OuterVolumeSpecName: 
"host-proc-sys-net") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 28 02:14:30.837817 kubelet[2655]: I0428 02:14:30.837206 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:14:30.837817 kubelet[2655]: I0428 02:14:30.837209 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk" (OuterVolumeSpecName: "kube-api-access-665pk") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "kube-api-access-665pk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:14:30.837886 kubelet[2655]: I0428 02:14:30.837801 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836e2b6e-a9ad-44c6-a036-3460365dff88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 02:14:30.838247 kubelet[2655]: I0428 02:14:30.838227 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "836e2b6e-a9ad-44c6-a036-3460365dff88" (UID: "836e2b6e-a9ad-44c6-a036-3460365dff88"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:14:30.936418 kubelet[2655]: I0428 02:14:30.936332 2655 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.936418 kubelet[2655]: I0428 02:14:30.936388 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/836e2b6e-a9ad-44c6-a036-3460365dff88-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.936418 kubelet[2655]: I0428 02:14:30.936402 2655 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/836e2b6e-a9ad-44c6-a036-3460365dff88-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.936418 kubelet[2655]: I0428 02:14:30.936409 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/836e2b6e-a9ad-44c6-a036-3460365dff88-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:30.936418 kubelet[2655]: I0428 02:14:30.936418 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-665pk\" (UniqueName: \"kubernetes.io/projected/836e2b6e-a9ad-44c6-a036-3460365dff88-kube-api-access-665pk\") on node \"localhost\" DevicePath \"\"" Apr 28 02:14:31.156379 kubelet[2655]: I0428 02:14:31.156335 2655 scope.go:117] "RemoveContainer" containerID="c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4" Apr 28 02:14:31.159946 containerd[1555]: time="2026-04-28T02:14:31.159550495Z" level=info msg="RemoveContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\"" Apr 28 02:14:31.165244 containerd[1555]: time="2026-04-28T02:14:31.165195323Z" level=info msg="RemoveContainer for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" returns 
successfully" Apr 28 02:14:31.165432 kubelet[2655]: I0428 02:14:31.165390 2655 scope.go:117] "RemoveContainer" containerID="c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4" Apr 28 02:14:31.165702 containerd[1555]: time="2026-04-28T02:14:31.165568299Z" level=error msg="ContainerStatus for \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\": not found" Apr 28 02:14:31.173026 kubelet[2655]: E0428 02:14:31.172989 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\": not found" containerID="c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4" Apr 28 02:14:31.173026 kubelet[2655]: I0428 02:14:31.173036 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4"} err="failed to get container status \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c036b154c329fca906da6f75a82f92d2b67a9656617eb9d71c43514b1261fce4\": not found" Apr 28 02:14:31.173204 kubelet[2655]: I0428 02:14:31.173069 2655 scope.go:117] "RemoveContainer" containerID="46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c" Apr 28 02:14:31.174053 containerd[1555]: time="2026-04-28T02:14:31.174006789Z" level=info msg="RemoveContainer for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\"" Apr 28 02:14:31.180566 containerd[1555]: time="2026-04-28T02:14:31.180096973Z" level=info msg="RemoveContainer for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" returns successfully" Apr 28 
02:14:31.180681 kubelet[2655]: I0428 02:14:31.180390 2655 scope.go:117] "RemoveContainer" containerID="7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0" Apr 28 02:14:31.181537 containerd[1555]: time="2026-04-28T02:14:31.181505281Z" level=info msg="RemoveContainer for \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\"" Apr 28 02:14:31.184039 containerd[1555]: time="2026-04-28T02:14:31.184000673Z" level=info msg="RemoveContainer for \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\" returns successfully" Apr 28 02:14:31.184217 kubelet[2655]: I0428 02:14:31.184196 2655 scope.go:117] "RemoveContainer" containerID="47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6" Apr 28 02:14:31.185217 containerd[1555]: time="2026-04-28T02:14:31.185134396Z" level=info msg="RemoveContainer for \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\"" Apr 28 02:14:31.196602 containerd[1555]: time="2026-04-28T02:14:31.196236620Z" level=info msg="RemoveContainer for \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\" returns successfully" Apr 28 02:14:31.196853 kubelet[2655]: I0428 02:14:31.196827 2655 scope.go:117] "RemoveContainer" containerID="e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e" Apr 28 02:14:31.197946 containerd[1555]: time="2026-04-28T02:14:31.197926974Z" level=info msg="RemoveContainer for \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\"" Apr 28 02:14:31.201661 containerd[1555]: time="2026-04-28T02:14:31.201608817Z" level=info msg="RemoveContainer for \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\" returns successfully" Apr 28 02:14:31.201941 kubelet[2655]: I0428 02:14:31.201907 2655 scope.go:117] "RemoveContainer" containerID="4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12" Apr 28 02:14:31.202773 containerd[1555]: time="2026-04-28T02:14:31.202751362Z" level=info msg="RemoveContainer for 
\"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\"" Apr 28 02:14:31.204948 containerd[1555]: time="2026-04-28T02:14:31.204914574Z" level=info msg="RemoveContainer for \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\" returns successfully" Apr 28 02:14:31.205120 kubelet[2655]: I0428 02:14:31.205081 2655 scope.go:117] "RemoveContainer" containerID="46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c" Apr 28 02:14:31.205360 containerd[1555]: time="2026-04-28T02:14:31.205308349Z" level=error msg="ContainerStatus for \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\": not found" Apr 28 02:14:31.205544 kubelet[2655]: E0428 02:14:31.205521 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\": not found" containerID="46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c" Apr 28 02:14:31.205650 kubelet[2655]: I0428 02:14:31.205551 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c"} err="failed to get container status \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"46aacd1f333e6d20d14fe36f3f5749cbc5f8ef99206b728c49ebf8181cbeeb6c\": not found" Apr 28 02:14:31.205650 kubelet[2655]: I0428 02:14:31.205568 2655 scope.go:117] "RemoveContainer" containerID="7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0" Apr 28 02:14:31.205757 containerd[1555]: time="2026-04-28T02:14:31.205731762Z" level=error msg="ContainerStatus for 
\"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\": not found" Apr 28 02:14:31.205843 kubelet[2655]: E0428 02:14:31.205826 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\": not found" containerID="7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0" Apr 28 02:14:31.205863 kubelet[2655]: I0428 02:14:31.205845 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0"} err="failed to get container status \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cacc5998ea03e0a1c5d67429d44e107074adc89cd839b693f8f0b5dafafb4b0\": not found" Apr 28 02:14:31.205863 kubelet[2655]: I0428 02:14:31.205856 2655 scope.go:117] "RemoveContainer" containerID="47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6" Apr 28 02:14:31.206582 containerd[1555]: time="2026-04-28T02:14:31.206486777Z" level=error msg="ContainerStatus for \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\": not found" Apr 28 02:14:31.206734 kubelet[2655]: E0428 02:14:31.206672 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\": not found" 
containerID="47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6" Apr 28 02:14:31.206763 kubelet[2655]: I0428 02:14:31.206740 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6"} err="failed to get container status \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"47690fcdc5a4900929739f725dd3e063805df0c411bebb40fe66185ce65da1e6\": not found" Apr 28 02:14:31.206763 kubelet[2655]: I0428 02:14:31.206758 2655 scope.go:117] "RemoveContainer" containerID="e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e" Apr 28 02:14:31.206945 containerd[1555]: time="2026-04-28T02:14:31.206916831Z" level=error msg="ContainerStatus for \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\": not found" Apr 28 02:14:31.207056 kubelet[2655]: E0428 02:14:31.207014 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\": not found" containerID="e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e" Apr 28 02:14:31.207089 kubelet[2655]: I0428 02:14:31.207044 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e"} err="failed to get container status \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1ca6945db46fd745c98b92ac63102d7a2949142df12c85b21853d1c029cf57e\": not found" Apr 28 
02:14:31.207089 kubelet[2655]: I0428 02:14:31.207065 2655 scope.go:117] "RemoveContainer" containerID="4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12" Apr 28 02:14:31.207271 containerd[1555]: time="2026-04-28T02:14:31.207241005Z" level=error msg="ContainerStatus for \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\": not found" Apr 28 02:14:31.207397 kubelet[2655]: E0428 02:14:31.207373 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\": not found" containerID="4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12" Apr 28 02:14:31.207493 kubelet[2655]: I0428 02:14:31.207420 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12"} err="failed to get container status \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\": rpc error: code = NotFound desc = an error occurred when try to find container \"4607aac6cb20a58610bcf494a0c5eb17c288cbde6f852a055557d06ba0d1ae12\": not found" Apr 28 02:14:31.566776 systemd[1]: var-lib-kubelet-pods-5658dd35\x2db9cb\x2d4b02\x2d8176\x2d45358b28fa2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72s6g.mount: Deactivated successfully. Apr 28 02:14:31.566928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3-rootfs.mount: Deactivated successfully. Apr 28 02:14:31.567082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74b267c5a2f12a5649436b91880b0c7cd67d50f76d612e71df685521947707d3-shm.mount: Deactivated successfully. 
Apr 28 02:14:31.567172 systemd[1]: var-lib-kubelet-pods-836e2b6e\x2da9ad\x2d44c6\x2da036\x2d3460365dff88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d665pk.mount: Deactivated successfully. Apr 28 02:14:31.567258 systemd[1]: var-lib-kubelet-pods-836e2b6e\x2da9ad\x2d44c6\x2da036\x2d3460365dff88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 28 02:14:31.567353 systemd[1]: var-lib-kubelet-pods-836e2b6e\x2da9ad\x2d44c6\x2da036\x2d3460365dff88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 28 02:14:31.923250 kubelet[2655]: I0428 02:14:31.923192 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5658dd35-b9cb-4b02-8176-45358b28fa2c" path="/var/lib/kubelet/pods/5658dd35-b9cb-4b02-8176-45358b28fa2c/volumes" Apr 28 02:14:31.923779 kubelet[2655]: I0428 02:14:31.923540 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836e2b6e-a9ad-44c6-a036-3460365dff88" path="/var/lib/kubelet/pods/836e2b6e-a9ad-44c6-a036-3460365dff88/volumes" Apr 28 02:14:31.984093 kubelet[2655]: E0428 02:14:31.984041 2655 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 02:14:32.506421 sshd[4296]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:32.515846 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:36984.service - OpenSSH per-connection server daemon (10.0.0.1:36984). Apr 28 02:14:32.516264 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:57780.service: Deactivated successfully. Apr 28 02:14:32.519417 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. Apr 28 02:14:32.520238 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 02:14:32.520998 systemd-logind[1534]: Removed session 23. 
Apr 28 02:14:32.547717 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 36984 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:32.548834 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:32.552438 systemd-logind[1534]: New session 24 of user core. Apr 28 02:14:32.559419 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 28 02:14:32.961342 sshd[4465]: pam_unix(sshd:session): session closed for user core Apr 28 02:14:32.963991 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:36984.service: Deactivated successfully. Apr 28 02:14:32.970410 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 02:14:32.971437 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. Apr 28 02:14:32.978584 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:36986.service - OpenSSH per-connection server daemon (10.0.0.1:36986). Apr 28 02:14:32.979737 systemd-logind[1534]: Removed session 24. Apr 28 02:14:33.037349 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 36986 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko Apr 28 02:14:33.038660 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:14:33.043723 systemd-logind[1534]: New session 25 of user core. 
Apr 28 02:14:33.053437 kubelet[2655]: I0428 02:14:33.053390 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-cilium-run\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.053437 kubelet[2655]: I0428 02:14:33.053428 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-hostproc\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053448 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36cdd892-64be-47ad-a3cf-ecd8f169c404-cilium-ipsec-secrets\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053464 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36cdd892-64be-47ad-a3cf-ecd8f169c404-cilium-config-path\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053481 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-bpf-maps\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053494 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-cilium-cgroup\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053525 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-etc-cni-netd\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.060979 kubelet[2655]: I0428 02:14:33.053575 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-xtables-lock\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061105 kubelet[2655]: I0428 02:14:33.053651 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-host-proc-sys-kernel\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061105 kubelet[2655]: I0428 02:14:33.053691 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36cdd892-64be-47ad-a3cf-ecd8f169c404-clustermesh-secrets\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061105 kubelet[2655]: I0428 02:14:33.053721 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22q5l\" (UniqueName: \"kubernetes.io/projected/36cdd892-64be-47ad-a3cf-ecd8f169c404-kube-api-access-22q5l\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061105 kubelet[2655]: I0428 02:14:33.053747 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-host-proc-sys-net\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061105 kubelet[2655]: I0428 02:14:33.053763 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36cdd892-64be-47ad-a3cf-ecd8f169c404-hubble-tls\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061237 kubelet[2655]: I0428 02:14:33.053779 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-cni-path\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061237 kubelet[2655]: I0428 02:14:33.053793 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36cdd892-64be-47ad-a3cf-ecd8f169c404-lib-modules\") pod \"cilium-k7n6b\" (UID: \"36cdd892-64be-47ad-a3cf-ecd8f169c404\") " pod="kube-system/cilium-k7n6b"
Apr 28 02:14:33.061269 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 28 02:14:33.111512 sshd[4482]: pam_unix(sshd:session): session closed for user core
Apr 28 02:14:33.122444 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:37002.service - OpenSSH per-connection server daemon (10.0.0.1:37002).
Apr 28 02:14:33.122819 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:36986.service: Deactivated successfully.
Apr 28 02:14:33.125387 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Apr 28 02:14:33.126055 systemd[1]: session-25.scope: Deactivated successfully.
Apr 28 02:14:33.126990 systemd-logind[1534]: Removed session 25.
Apr 28 02:14:33.151727 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 37002 ssh2: RSA SHA256:X/xX/oNFYBze8ouDyrFQOn+sGrMSNH/oRDsxh++w0ko
Apr 28 02:14:33.152945 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:14:33.163107 kubelet[2655]: I0428 02:14:33.163067 2655 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T02:14:33Z","lastTransitionTime":"2026-04-28T02:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 28 02:14:33.168827 systemd-logind[1534]: New session 26 of user core.
Apr 28 02:14:33.173536 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 28 02:14:33.315378 kubelet[2655]: E0428 02:14:33.311767 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:33.315524 containerd[1555]: time="2026-04-28T02:14:33.312823699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7n6b,Uid:36cdd892-64be-47ad-a3cf-ecd8f169c404,Namespace:kube-system,Attempt:0,}"
Apr 28 02:14:33.335270 containerd[1555]: time="2026-04-28T02:14:33.334726159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 02:14:33.335270 containerd[1555]: time="2026-04-28T02:14:33.335235967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 02:14:33.335400 containerd[1555]: time="2026-04-28T02:14:33.335246979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:14:33.335419 containerd[1555]: time="2026-04-28T02:14:33.335398693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 02:14:33.366992 containerd[1555]: time="2026-04-28T02:14:33.366915755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7n6b,Uid:36cdd892-64be-47ad-a3cf-ecd8f169c404,Namespace:kube-system,Attempt:0,} returns sandbox id \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\""
Apr 28 02:14:33.367835 kubelet[2655]: E0428 02:14:33.367483 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:33.372456 containerd[1555]: time="2026-04-28T02:14:33.372404153Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 28 02:14:33.384055 containerd[1555]: time="2026-04-28T02:14:33.384008450Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b57ecb3b7cf2128982c5a636e9918b2bdd1e8d607e9be0c37f58d86e941fd166\""
Apr 28 02:14:33.384916 containerd[1555]: time="2026-04-28T02:14:33.384854042Z" level=info msg="StartContainer for \"b57ecb3b7cf2128982c5a636e9918b2bdd1e8d607e9be0c37f58d86e941fd166\""
Apr 28 02:14:33.437694 containerd[1555]: time="2026-04-28T02:14:33.437569086Z" level=info msg="StartContainer for \"b57ecb3b7cf2128982c5a636e9918b2bdd1e8d607e9be0c37f58d86e941fd166\" returns successfully"
Apr 28 02:14:33.466265 containerd[1555]: time="2026-04-28T02:14:33.466181436Z" level=info msg="shim disconnected" id=b57ecb3b7cf2128982c5a636e9918b2bdd1e8d607e9be0c37f58d86e941fd166 namespace=k8s.io
Apr 28 02:14:33.466265 containerd[1555]: time="2026-04-28T02:14:33.466237778Z" level=warning msg="cleaning up after shim disconnected" id=b57ecb3b7cf2128982c5a636e9918b2bdd1e8d607e9be0c37f58d86e941fd166 namespace=k8s.io
Apr 28 02:14:33.466265 containerd[1555]: time="2026-04-28T02:14:33.466245017Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:14:34.183657 kubelet[2655]: E0428 02:14:34.183611 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:34.193329 containerd[1555]: time="2026-04-28T02:14:34.191575550Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 28 02:14:34.207835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771658326.mount: Deactivated successfully.
Apr 28 02:14:34.208640 containerd[1555]: time="2026-04-28T02:14:34.208609555Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb\""
Apr 28 02:14:34.209078 containerd[1555]: time="2026-04-28T02:14:34.209053325Z" level=info msg="StartContainer for \"4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb\""
Apr 28 02:14:34.251665 containerd[1555]: time="2026-04-28T02:14:34.251607637Z" level=info msg="StartContainer for \"4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb\" returns successfully"
Apr 28 02:14:34.272019 containerd[1555]: time="2026-04-28T02:14:34.271933224Z" level=info msg="shim disconnected" id=4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb namespace=k8s.io
Apr 28 02:14:34.272019 containerd[1555]: time="2026-04-28T02:14:34.271981828Z" level=warning msg="cleaning up after shim disconnected" id=4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb namespace=k8s.io
Apr 28 02:14:34.272019 containerd[1555]: time="2026-04-28T02:14:34.271988194Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:14:35.160644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a2018aec3b47e0904ce4c5c9160e076b2ae63304385caed6cb7ffa4e6aabfdb-rootfs.mount: Deactivated successfully.
Apr 28 02:14:35.189030 kubelet[2655]: E0428 02:14:35.188772 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:35.200822 containerd[1555]: time="2026-04-28T02:14:35.200667444Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 28 02:14:35.264970 containerd[1555]: time="2026-04-28T02:14:35.264676192Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1\""
Apr 28 02:14:35.265380 containerd[1555]: time="2026-04-28T02:14:35.265359543Z" level=info msg="StartContainer for \"c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1\""
Apr 28 02:14:35.325897 containerd[1555]: time="2026-04-28T02:14:35.325830960Z" level=info msg="StartContainer for \"c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1\" returns successfully"
Apr 28 02:14:35.351453 containerd[1555]: time="2026-04-28T02:14:35.351326051Z" level=info msg="shim disconnected" id=c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1 namespace=k8s.io
Apr 28 02:14:35.351453 containerd[1555]: time="2026-04-28T02:14:35.351388925Z" level=warning msg="cleaning up after shim disconnected" id=c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1 namespace=k8s.io
Apr 28 02:14:35.351453 containerd[1555]: time="2026-04-28T02:14:35.351399940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:14:36.160878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6fd2944fdfda990f1e88fe9c80a37bcdac926e8d75e2cbd6c2764d49a53d8f1-rootfs.mount: Deactivated successfully.
Apr 28 02:14:36.192595 kubelet[2655]: E0428 02:14:36.192512 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:36.199415 containerd[1555]: time="2026-04-28T02:14:36.199303454Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 28 02:14:36.236794 containerd[1555]: time="2026-04-28T02:14:36.236685306Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0\""
Apr 28 02:14:36.238127 containerd[1555]: time="2026-04-28T02:14:36.237980572Z" level=info msg="StartContainer for \"3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0\""
Apr 28 02:14:36.324186 containerd[1555]: time="2026-04-28T02:14:36.324125971Z" level=info msg="StartContainer for \"3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0\" returns successfully"
Apr 28 02:14:36.343542 containerd[1555]: time="2026-04-28T02:14:36.343340152Z" level=info msg="shim disconnected" id=3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0 namespace=k8s.io
Apr 28 02:14:36.343542 containerd[1555]: time="2026-04-28T02:14:36.343549572Z" level=warning msg="cleaning up after shim disconnected" id=3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0 namespace=k8s.io
Apr 28 02:14:36.343774 containerd[1555]: time="2026-04-28T02:14:36.343569966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 02:14:36.985560 kubelet[2655]: E0428 02:14:36.985510 2655 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 02:14:37.159880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ea12e51223b4fc3d6b11535e876cc0b38f7b95e1462ee74cf79f6fe3154a0c0-rootfs.mount: Deactivated successfully.
Apr 28 02:14:37.196281 kubelet[2655]: E0428 02:14:37.196245 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:37.200894 containerd[1555]: time="2026-04-28T02:14:37.200808337Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 02:14:37.216394 containerd[1555]: time="2026-04-28T02:14:37.216285239Z" level=info msg="CreateContainer within sandbox \"e10586b394d857712d2790cb28f38002403ee13aad6834f2d4b2e1515c85a446\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a52c437d93ae0c782ce50f7010f931b50caa3bd6380ed7aed2d333b8d2289e8\""
Apr 28 02:14:37.216837 containerd[1555]: time="2026-04-28T02:14:37.216802788Z" level=info msg="StartContainer for \"8a52c437d93ae0c782ce50f7010f931b50caa3bd6380ed7aed2d333b8d2289e8\""
Apr 28 02:14:37.266990 containerd[1555]: time="2026-04-28T02:14:37.266658658Z" level=info msg="StartContainer for \"8a52c437d93ae0c782ce50f7010f931b50caa3bd6380ed7aed2d333b8d2289e8\" returns successfully"
Apr 28 02:14:37.524230 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 28 02:14:38.159604 systemd[1]: run-containerd-runc-k8s.io-8a52c437d93ae0c782ce50f7010f931b50caa3bd6380ed7aed2d333b8d2289e8-runc.5G6yRD.mount: Deactivated successfully.
Apr 28 02:14:38.202060 kubelet[2655]: E0428 02:14:38.202018 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:38.218898 kubelet[2655]: I0428 02:14:38.218740 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k7n6b" podStartSLOduration=6.21871903 podStartE2EDuration="6.21871903s" podCreationTimestamp="2026-04-28 02:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:14:38.21859365 +0000 UTC m=+76.374068629" watchObservedRunningTime="2026-04-28 02:14:38.21871903 +0000 UTC m=+76.374194004"
Apr 28 02:14:39.313686 kubelet[2655]: E0428 02:14:39.313613 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:40.387900 systemd-networkd[1249]: lxc_health: Link UP
Apr 28 02:14:40.396907 systemd-networkd[1249]: lxc_health: Gained carrier
Apr 28 02:14:41.315624 kubelet[2655]: E0428 02:14:41.315547 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:41.922479 kubelet[2655]: E0428 02:14:41.922426 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:42.212033 kubelet[2655]: E0428 02:14:42.210618 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:42.460521 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Apr 28 02:14:43.212956 kubelet[2655]: E0428 02:14:43.212893 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 02:14:45.788798 sshd[4489]: pam_unix(sshd:session): session closed for user core
Apr 28 02:14:45.793755 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:37002.service: Deactivated successfully.
Apr 28 02:14:45.795398 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Apr 28 02:14:45.795450 systemd[1]: session-26.scope: Deactivated successfully.
Apr 28 02:14:45.796470 systemd-logind[1534]: Removed session 26.
Apr 28 02:14:45.922598 kubelet[2655]: E0428 02:14:45.922495 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"