Apr 28 01:09:43.027078 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 01:09:43.027112 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 01:09:43.027125 kernel: BIOS-provided physical RAM map:
Apr 28 01:09:43.027132 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 01:09:43.027138 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 01:09:43.027144 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 01:09:43.027152 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 28 01:09:43.027160 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 28 01:09:43.027168 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 01:09:43.027179 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 01:09:43.027186 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 01:09:43.027192 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 01:09:43.027260 kernel: NX (Execute Disable) protection: active
Apr 28 01:09:43.027270 kernel: APIC: Static calls initialized
Apr 28 01:09:43.027278 kernel: SMBIOS 2.8 present.
Apr 28 01:09:43.027313 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 28 01:09:43.027321 kernel: Hypervisor detected: KVM
Apr 28 01:09:43.027328 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 01:09:43.027335 kernel: kvm-clock: using sched offset of 13286037469 cycles
Apr 28 01:09:43.027344 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 01:09:43.027352 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 01:09:43.027361 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 01:09:43.027370 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 01:09:43.027378 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 28 01:09:43.027389 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 01:09:43.027397 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 01:09:43.027405 kernel: Using GB pages for direct mapping
Apr 28 01:09:43.027413 kernel: ACPI: Early table checksum verification disabled
Apr 28 01:09:43.027422 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 28 01:09:43.027430 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027439 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027446 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027454 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 28 01:09:43.027464 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027473 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027481 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027489 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 01:09:43.027495 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 28 01:09:43.027502 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 28 01:09:43.027510 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 28 01:09:43.027522 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 28 01:09:43.027533 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 28 01:09:43.027540 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 28 01:09:43.027549 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 28 01:09:43.027557 kernel: No NUMA configuration found
Apr 28 01:09:43.027565 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 28 01:09:43.027573 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 28 01:09:43.027583 kernel: Zone ranges:
Apr 28 01:09:43.027591 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 01:09:43.027599 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 28 01:09:43.027607 kernel: Normal empty
Apr 28 01:09:43.027615 kernel: Movable zone start for each node
Apr 28 01:09:43.027624 kernel: Early memory node ranges
Apr 28 01:09:43.027632 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 01:09:43.027641 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 28 01:09:43.027649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 28 01:09:43.027659 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 01:09:43.027668 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 01:09:43.027702 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 28 01:09:43.027710 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 01:09:43.027718 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 01:09:43.027726 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 01:09:43.027734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 01:09:43.027742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 01:09:43.027750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 01:09:43.027761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 01:09:43.027768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 01:09:43.027777 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 01:09:43.027786 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 01:09:43.027794 kernel: TSC deadline timer available
Apr 28 01:09:43.027802 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 28 01:09:43.027811 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 01:09:43.027820 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 01:09:43.027829 kernel: kvm-guest: setup PV sched yield
Apr 28 01:09:43.028620 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 01:09:43.028993 kernel: Booting paravirtualized kernel on KVM
Apr 28 01:09:43.029012 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 01:09:43.029022 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 01:09:43.029031 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 28 01:09:43.029039 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 28 01:09:43.029048 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 01:09:43.029056 kernel: kvm-guest: PV spinlocks enabled
Apr 28 01:09:43.029065 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 01:09:43.029086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 01:09:43.029095 kernel: random: crng init done
Apr 28 01:09:43.029104 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 01:09:43.029114 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 01:09:43.029123 kernel: Fallback order for Node 0: 0
Apr 28 01:09:43.029132 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 28 01:09:43.029140 kernel: Policy zone: DMA32
Apr 28 01:09:43.029147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 01:09:43.029155 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137900K reserved, 0K cma-reserved)
Apr 28 01:09:43.029166 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 01:09:43.029175 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 01:09:43.029184 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 01:09:43.029193 kernel: Dynamic Preempt: voluntary
Apr 28 01:09:43.029259 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 01:09:43.029270 kernel: rcu: RCU event tracing is enabled.
Apr 28 01:09:43.029280 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 01:09:43.029289 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 01:09:43.029298 kernel: Rude variant of Tasks RCU enabled.
Apr 28 01:09:43.029310 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 01:09:43.029319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 01:09:43.029328 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 01:09:43.029337 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 01:09:43.029373 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 01:09:43.029383 kernel: Console: colour VGA+ 80x25
Apr 28 01:09:43.029391 kernel: printk: console [ttyS0] enabled
Apr 28 01:09:43.029398 kernel: ACPI: Core revision 20230628
Apr 28 01:09:43.029406 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 01:09:43.029417 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 01:09:43.029425 kernel: x2apic enabled
Apr 28 01:09:43.029434 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 01:09:43.029443 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 01:09:43.029452 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 01:09:43.029460 kernel: kvm-guest: setup PV IPIs
Apr 28 01:09:43.029468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 01:09:43.029478 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 01:09:43.029498 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 01:09:43.029507 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 01:09:43.029517 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 01:09:43.029528 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 01:09:43.029537 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 01:09:43.029547 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 01:09:43.029556 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 01:09:43.029567 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 01:09:43.029579 kernel: RETBleed: Vulnerable
Apr 28 01:09:43.029588 kernel: Speculative Store Bypass: Vulnerable
Apr 28 01:09:43.029598 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 01:09:43.029635 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 01:09:43.029644 kernel: active return thunk: its_return_thunk
Apr 28 01:09:43.029652 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 01:09:43.029660 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 01:09:43.029668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 01:09:43.029677 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 01:09:43.029688 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 01:09:43.029696 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 01:09:43.029705 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 01:09:43.029714 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 01:09:43.029723 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 01:09:43.029733 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 01:09:43.029743 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 01:09:43.029751 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 01:09:43.029760 kernel: Freeing SMP alternatives memory: 32K
Apr 28 01:09:43.029772 kernel: pid_max: default: 32768 minimum: 301
Apr 28 01:09:43.029781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 01:09:43.029790 kernel: landlock: Up and running.
Apr 28 01:09:43.029798 kernel: SELinux: Initializing.
Apr 28 01:09:43.029807 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 01:09:43.029816 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 01:09:43.029826 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 01:09:43.030181 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 01:09:43.030826 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 01:09:43.030892 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 01:09:43.030902 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 01:09:43.030910 kernel: signal: max sigframe size: 3632
Apr 28 01:09:43.030918 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 01:09:43.030926 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 01:09:43.030935 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 01:09:43.030944 kernel: smp: Bringing up secondary CPUs ...
Apr 28 01:09:43.030953 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 01:09:43.030962 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 01:09:43.030975 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 01:09:43.030984 kernel: smpboot: Max logical packages: 1
Apr 28 01:09:43.030993 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 01:09:43.031002 kernel: devtmpfs: initialized
Apr 28 01:09:43.031011 kernel: x86/mm: Memory block size: 128MB
Apr 28 01:09:43.031020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 01:09:43.031029 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 01:09:43.031038 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 01:09:43.031046 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 01:09:43.031058 kernel: audit: initializing netlink subsys (disabled)
Apr 28 01:09:43.031067 kernel: audit: type=2000 audit(1777338569.469:1): state=initialized audit_enabled=0 res=1
Apr 28 01:09:43.031076 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 01:09:43.031085 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 01:09:43.031095 kernel: cpuidle: using governor menu
Apr 28 01:09:43.031104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 01:09:43.031114 kernel: dca service started, version 1.12.1
Apr 28 01:09:43.031124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 01:09:43.031133 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 01:09:43.031144 kernel: PCI: Using configuration type 1 for base access
Apr 28 01:09:43.031152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 01:09:43.031160 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 01:09:43.031169 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 01:09:43.031179 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 01:09:43.031189 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 01:09:43.031250 kernel: ACPI: Added _OSI(Module Device)
Apr 28 01:09:43.031260 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 01:09:43.031664 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 01:09:43.031713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 01:09:43.031723 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 01:09:43.031733 kernel: ACPI: Interpreter enabled
Apr 28 01:09:43.031743 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 01:09:43.031752 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 01:09:43.031763 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 01:09:43.031774 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 01:09:43.031784 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 01:09:43.031794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 01:09:43.033068 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 01:09:43.033186 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 01:09:43.035607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 01:09:43.035658 kernel: PCI host bridge to bus 0000:00
Apr 28 01:09:43.037366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 01:09:43.037486 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 01:09:43.037575 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 01:09:43.037655 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 28 01:09:43.037733 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 01:09:43.037812 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 28 01:09:43.038765 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 01:09:43.039688 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 01:09:43.040034 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 28 01:09:43.040139 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 28 01:09:43.040333 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 28 01:09:43.040453 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 28 01:09:43.040544 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 01:09:43.040667 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 28 01:09:43.040756 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 01:09:43.041292 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 28 01:09:43.041488 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 28 01:09:43.043635 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 28 01:09:43.043737 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 01:09:43.043823 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 28 01:09:43.043956 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 28 01:09:43.047410 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 28 01:09:43.048741 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 28 01:09:43.048840 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 28 01:09:43.052358 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 28 01:09:43.052483 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 28 01:09:43.052601 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 01:09:43.053174 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 01:09:43.054589 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 01:09:43.054782 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 28 01:09:43.054949 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 28 01:09:43.060100 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 01:09:43.060281 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 01:09:43.060298 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 01:09:43.060306 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 01:09:43.060316 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 01:09:43.060332 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 01:09:43.060341 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 01:09:43.060378 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 01:09:43.060388 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 01:09:43.060396 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 01:09:43.060405 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 01:09:43.060415 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 01:09:43.060423 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 01:09:43.060433 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 01:09:43.060445 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 01:09:43.060454 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 01:09:43.060464 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 01:09:43.060472 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 01:09:43.060482 kernel: iommu: Default domain type: Translated
Apr 28 01:09:43.060492 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 01:09:43.060500 kernel: PCI: Using ACPI for IRQ routing
Apr 28 01:09:43.060510 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 01:09:43.060519 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 01:09:43.060530 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 28 01:09:43.060637 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 01:09:43.060734 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 01:09:43.060824 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 01:09:43.060837 kernel: vgaarb: loaded
Apr 28 01:09:43.061674 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 01:09:43.061718 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 01:09:43.061729 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 01:09:43.061738 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 01:09:43.061756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 01:09:43.061765 kernel: pnp: PnP ACPI init
Apr 28 01:09:43.062095 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 01:09:43.062111 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 01:09:43.062121 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 01:09:43.062130 kernel: NET: Registered PF_INET protocol family
Apr 28 01:09:43.062140 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 01:09:43.062150 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 01:09:43.062164 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 01:09:43.062175 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 01:09:43.062183 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 01:09:43.062192 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 01:09:43.062274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 01:09:43.062285 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 01:09:43.062295 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 01:09:43.062303 kernel: NET: Registered PF_XDP protocol family
Apr 28 01:09:43.062407 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 01:09:43.062501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 01:09:43.062587 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 01:09:43.062671 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 28 01:09:43.062755 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 01:09:43.062841 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 28 01:09:43.062894 kernel: PCI: CLS 0 bytes, default 64
Apr 28 01:09:43.062903 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 01:09:43.062940 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 01:09:43.062978 kernel: Initialise system trusted keyrings
Apr 28 01:09:43.062987 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 01:09:43.062996 kernel: Key type asymmetric registered
Apr 28 01:09:43.063005 kernel: Asymmetric key parser 'x509' registered
Apr 28 01:09:43.063013 kernel: hrtimer: interrupt took 3990858 ns
Apr 28 01:09:43.063022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 28 01:09:43.063031 kernel: io scheduler mq-deadline registered
Apr 28 01:09:43.063040 kernel: io scheduler kyber registered
Apr 28 01:09:43.063049 kernel: io scheduler bfq registered
Apr 28 01:09:43.063060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 01:09:43.063070 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 01:09:43.063081 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 01:09:43.063092 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 01:09:43.063102 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 01:09:43.063112 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 01:09:43.063122 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 01:09:43.063132 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 01:09:43.063142 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 01:09:43.065780 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 01:09:43.065806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 01:09:43.065939 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 01:09:43.066028 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T01:09:38 UTC (1777338578)
Apr 28 01:09:43.066116 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 28 01:09:43.066129 kernel: intel_pstate: CPU model not supported
Apr 28 01:09:43.066137 kernel: NET: Registered PF_INET6 protocol family
Apr 28 01:09:43.066147 kernel: Segment Routing with IPv6
Apr 28 01:09:43.066162 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 01:09:43.066171 kernel: NET: Registered PF_PACKET protocol family
Apr 28 01:09:43.066182 kernel: Key type dns_resolver registered
Apr 28 01:09:43.066190 kernel: IPI shorthand broadcast: enabled
Apr 28 01:09:43.067011 kernel: sched_clock: Marking stable (8460024201, 667148087)->(10053074079, -925901791)
Apr 28 01:09:43.068513 kernel: registered taskstats version 1
Apr 28 01:09:43.068524 kernel: Loading compiled-in X.509 certificates
Apr 28 01:09:43.068533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18'
Apr 28 01:09:43.068544 kernel: Key type .fscrypt registered
Apr 28 01:09:43.068559 kernel: Key type fscrypt-provisioning registered
Apr 28 01:09:43.068568 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 28 01:09:43.068578 kernel: ima: Allocated hash algorithm: sha1
Apr 28 01:09:43.068585 kernel: ima: No architecture policies found
Apr 28 01:09:43.068595 kernel: clk: Disabling unused clocks
Apr 28 01:09:43.068604 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 01:09:43.068612 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 01:09:43.068651 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 01:09:43.068661 kernel: Run /init as init process
Apr 28 01:09:43.068673 kernel: with arguments:
Apr 28 01:09:43.068683 kernel: /init
Apr 28 01:09:43.068692 kernel: with environment:
Apr 28 01:09:43.068700 kernel: HOME=/
Apr 28 01:09:43.068710 kernel: TERM=linux
Apr 28 01:09:43.068722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 01:09:43.068735 systemd[1]: Detected virtualization kvm.
Apr 28 01:09:43.068745 systemd[1]: Detected architecture x86-64.
Apr 28 01:09:43.068757 systemd[1]: Running in initrd.
Apr 28 01:09:43.068767 systemd[1]: No hostname configured, using default hostname.
Apr 28 01:09:43.068775 systemd[1]: Hostname set to .
Apr 28 01:09:43.068787 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 01:09:43.068796 systemd[1]: Queued start job for default target initrd.target.
Apr 28 01:09:43.068804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:09:43.068816 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:09:43.068825 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 01:09:43.068839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 01:09:43.068883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 01:09:43.068906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 01:09:43.068922 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 01:09:43.068931 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 01:09:43.068944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:09:43.068954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:09:43.068963 systemd[1]: Reached target paths.target - Path Units.
Apr 28 01:09:43.068973 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 01:09:43.068982 systemd[1]: Reached target swap.target - Swaps.
Apr 28 01:09:43.068992 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 01:09:43.069001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 01:09:43.069011 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 01:09:43.069023 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 01:09:43.069034 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 01:09:43.069046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:09:43.069056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:09:43.069068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:09:43.069079 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:09:43.069091 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 01:09:43.069102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 01:09:43.069113 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 01:09:43.069126 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 01:09:43.069137 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 01:09:43.069149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 01:09:43.069160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:09:43.069172 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 01:09:43.069183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:09:43.069195 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 01:09:43.070999 systemd-journald[194]: Collecting audit messages is disabled.
Apr 28 01:09:43.071041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 01:09:43.071052 systemd-journald[194]: Journal started
Apr 28 01:09:43.071075 systemd-journald[194]: Runtime Journal (/run/log/journal/01287489d2454c9881cc49abd3dd92c2) is 6.0M, max 48.4M, 42.3M free.
Apr 28 01:09:43.060440 systemd-modules-load[195]: Inserted module 'overlay'
Apr 28 01:09:44.063830 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 01:09:44.063918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 01:09:44.063934 kernel: Bridge firewalling registered
Apr 28 01:09:43.281166 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 28 01:09:44.075807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:09:44.079317 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 01:09:44.163832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 01:09:44.291741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 01:09:44.313191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:09:44.330791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:09:44.405751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:09:44.442125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:09:44.472945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:09:44.489273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:09:44.565978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:09:44.660836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:09:44.753580 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 01:09:44.956538 dracut-cmdline[232]: dracut-dracut-053
Apr 28 01:09:44.975677 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 01:09:45.004742 systemd-resolved[223]: Positive Trust Anchors:
Apr 28 01:09:45.004752 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:09:45.004786 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:09:45.054949 systemd-resolved[223]: Defaulting to hostname 'linux'.
Apr 28 01:09:45.082559 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:09:45.116385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:09:46.823617 kernel: SCSI subsystem initialized
Apr 28 01:09:46.958627 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 01:09:47.044843 kernel: iscsi: registered transport (tcp)
Apr 28 01:09:47.309819 kernel: iscsi: registered transport (qla4xxx)
Apr 28 01:09:47.310068 kernel: QLogic iSCSI HBA Driver
Apr 28 01:09:47.919301 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 01:09:47.958133 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 01:09:48.262325 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 28 01:09:48.262739 kernel: device-mapper: uevent: version 1.0.3
Apr 28 01:09:48.275728 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 28 01:09:48.748759 kernel: raid6: avx512x4 gen() 16300 MB/s
Apr 28 01:09:48.774706 kernel: raid6: avx512x2 gen() 16656 MB/s
Apr 28 01:09:48.818750 kernel: raid6: avx512x1 gen() 9508 MB/s
Apr 28 01:09:48.838748 kernel: raid6: avx2x4 gen() 13467 MB/s
Apr 28 01:09:48.858112 kernel: raid6: avx2x2 gen() 16158 MB/s
Apr 28 01:09:48.879011 kernel: raid6: avx2x1 gen() 10152 MB/s
Apr 28 01:09:48.879707 kernel: raid6: using algorithm avx512x2 gen() 16656 MB/s
Apr 28 01:09:48.902791 kernel: raid6: .... xor() 3518 MB/s, rmw enabled
Apr 28 01:09:48.903065 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 01:09:49.218973 kernel: xor: automatically using best checksumming function avx
Apr 28 01:09:51.426857 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 01:09:51.612733 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 01:09:51.645996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:09:52.097481 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 28 01:09:52.169996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:09:52.229547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 01:09:52.481648 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Apr 28 01:09:52.838863 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 01:09:52.897783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 01:09:53.223070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:09:53.245554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 01:09:53.346851 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 01:09:53.362485 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 01:09:53.367541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:09:53.403823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 01:09:53.419803 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 01:09:53.429490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 28 01:09:53.462753 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 28 01:09:53.523960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 01:09:53.524092 kernel: GPT:9289727 != 19775487
Apr 28 01:09:53.524109 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 01:09:53.530608 kernel: GPT:9289727 != 19775487
Apr 28 01:09:53.530846 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 01:09:53.537551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:09:53.612545 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 01:09:53.621280 kernel: libata version 3.00 loaded.
Apr 28 01:09:53.621778 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 01:09:53.661815 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 01:09:53.662167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:09:53.681847 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:09:53.720846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:09:53.723704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:09:53.734572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:09:53.772682 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 28 01:09:53.775672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:09:53.819022 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 01:09:53.819369 kernel: AES CTR mode by8 optimization enabled
Apr 28 01:09:53.831621 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 01:09:53.876029 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (481)
Apr 28 01:09:53.916136 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 28 01:09:53.916543 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 01:09:53.932794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 28 01:09:53.941196 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 01:09:53.957323 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Apr 28 01:09:54.003167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 01:09:54.864424 kernel: scsi host0: ahci
Apr 28 01:09:54.864771 kernel: scsi host1: ahci
Apr 28 01:09:54.864889 kernel: scsi host2: ahci
Apr 28 01:09:54.867369 kernel: scsi host3: ahci
Apr 28 01:09:54.867545 kernel: scsi host4: ahci
Apr 28 01:09:54.870130 kernel: scsi host5: ahci
Apr 28 01:09:54.870362 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 28 01:09:54.870386 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 28 01:09:54.870397 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 28 01:09:54.870409 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 28 01:09:54.870420 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 28 01:09:54.870431 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 28 01:09:54.870442 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 01:09:54.870455 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 01:09:54.870466 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 01:09:54.870479 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 01:09:54.870490 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 01:09:54.870500 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 01:09:54.870511 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 01:09:54.870522 kernel: ata3.00: applying bridge limits
Apr 28 01:09:54.870533 kernel: ata3.00: configured for UDMA/100
Apr 28 01:09:54.870544 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 28 01:09:54.870705 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 01:09:54.870819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 01:09:54.870836 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 01:09:54.878747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 01:09:54.892582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:09:54.976873 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 01:09:55.026776 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 01:09:55.053680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:09:55.114915 disk-uuid[570]: Primary Header is updated.
Apr 28 01:09:55.114915 disk-uuid[570]: Secondary Entries is updated.
Apr 28 01:09:55.114915 disk-uuid[570]: Secondary Header is updated.
Apr 28 01:09:55.132869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:09:55.153680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:09:55.227860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:09:56.183439 disk-uuid[571]: Warning: The kernel is still using the old partition table.
Apr 28 01:09:56.183439 disk-uuid[571]: The new table will be used at the next reboot or after you
Apr 28 01:09:56.183439 disk-uuid[571]: run partprobe(8) or kpartx(8)
Apr 28 01:09:56.183439 disk-uuid[571]: The operation has completed successfully.
Apr 28 01:09:56.922267 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 01:09:56.929706 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 01:09:57.442776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 28 01:09:57.906086 sh[592]: Success
Apr 28 01:09:58.096410 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 28 01:09:59.038170 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 01:09:59.118650 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 28 01:09:59.269016 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 28 01:09:59.770936 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93
Apr 28 01:09:59.773337 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:09:59.773357 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 28 01:09:59.806864 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 28 01:09:59.837807 kernel: BTRFS info (device dm-0): using free space tree
Apr 28 01:10:00.197084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 28 01:10:00.219299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 01:10:00.264846 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 28 01:10:00.363715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 01:10:00.540539 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:10:00.541808 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:10:00.548764 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:10:00.580184 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:10:00.723586 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 28 01:10:00.736833 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:10:00.826414 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 28 01:10:00.843058 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 28 01:10:02.955167 ignition[684]: Ignition 2.19.0
Apr 28 01:10:02.955325 ignition[684]: Stage: fetch-offline
Apr 28 01:10:02.956250 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:10:02.956281 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:10:02.958528 ignition[684]: parsed url from cmdline: ""
Apr 28 01:10:02.958534 ignition[684]: no config URL provided
Apr 28 01:10:02.958542 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 01:10:02.958591 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Apr 28 01:10:02.961637 ignition[684]: op(1): [started] loading QEMU firmware config module
Apr 28 01:10:02.961646 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 28 01:10:03.067012 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 01:10:03.141830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 01:10:03.157358 ignition[684]: op(1): [finished] loading QEMU firmware config module
Apr 28 01:10:03.734185 systemd-networkd[780]: lo: Link UP
Apr 28 01:10:03.735627 systemd-networkd[780]: lo: Gained carrier
Apr 28 01:10:03.762632 systemd-networkd[780]: Enumeration completed
Apr 28 01:10:03.779048 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 01:10:03.831027 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:10:03.831033 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 01:10:03.854796 systemd-networkd[780]: eth0: Link UP
Apr 28 01:10:03.854801 systemd-networkd[780]: eth0: Gained carrier
Apr 28 01:10:03.854816 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:10:03.865835 systemd[1]: Reached target network.target - Network.
Apr 28 01:10:03.908819 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 01:10:04.215151 ignition[684]: parsing config with SHA512: 2d6c11993eac9c19f1d678928ae82e36cca48fbcfab9bf5d1bd5e1e00cd229d49ffd2c80ffad37ea2e1195b38bcdffe0e07dd2c9ae728bb93c3eab707314036b
Apr 28 01:10:04.463693 unknown[684]: fetched base config from "system"
Apr 28 01:10:04.463704 unknown[684]: fetched user config from "qemu"
Apr 28 01:10:04.468880 ignition[684]: fetch-offline: fetch-offline passed
Apr 28 01:10:04.474524 ignition[684]: Ignition finished successfully
Apr 28 01:10:04.479963 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 01:10:04.523401 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 28 01:10:04.581177 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 01:10:05.073906 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 28 01:10:05.411524 ignition[784]: Ignition 2.19.0
Apr 28 01:10:05.412637 ignition[784]: Stage: kargs
Apr 28 01:10:05.413952 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:10:05.413967 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:10:05.417810 ignition[784]: kargs: kargs passed
Apr 28 01:10:05.418147 ignition[784]: Ignition finished successfully
Apr 28 01:10:05.471430 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 01:10:05.589757 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 01:10:08.332943 ignition[791]: Ignition 2.19.0
Apr 28 01:10:08.336931 ignition[791]: Stage: disks
Apr 28 01:10:08.346600 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:10:08.346696 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:10:08.354343 ignition[791]: disks: disks passed
Apr 28 01:10:08.375636 ignition[791]: Ignition finished successfully
Apr 28 01:10:08.454618 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 01:10:08.501390 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 01:10:08.510314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 01:10:08.510778 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 01:10:08.557914 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:10:08.625873 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:10:08.874406 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 01:10:10.488715 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 28 01:10:10.520652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 01:10:10.559880 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 01:10:12.708395 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 01:10:12.824434 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 01:10:12.882896 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 01:10:13.134732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:10:13.174509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 01:10:13.180894 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 01:10:13.181123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 01:10:13.182517 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 01:10:13.432019 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Apr 28 01:10:13.449576 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:10:13.449767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:10:13.449815 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:10:13.504087 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 01:10:13.555913 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:10:13.615893 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 01:10:13.710864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:10:15.165025 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 01:10:15.197616 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 28 01:10:15.252914 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 01:10:15.409366 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 01:10:18.331045 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 01:10:18.436749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 01:10:18.450131 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 01:10:18.743774 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:10:18.743979 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 01:10:18.942584 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 01:10:19.567025 ignition[924]: INFO : Ignition 2.19.0
Apr 28 01:10:19.567025 ignition[924]: INFO : Stage: mount
Apr 28 01:10:19.637981 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:10:19.637981 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:10:19.654655 ignition[924]: INFO : mount: mount passed
Apr 28 01:10:19.654655 ignition[924]: INFO : Ignition finished successfully
Apr 28 01:10:19.675963 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 01:10:19.758698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 01:10:20.432254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:10:20.710959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Apr 28 01:10:20.727009 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:10:20.728967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:10:20.740705 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:10:20.861870 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:10:20.971473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:10:21.573825 ignition[955]: INFO : Ignition 2.19.0
Apr 28 01:10:21.622137 ignition[955]: INFO : Stage: files
Apr 28 01:10:21.635410 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:10:21.642505 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:10:21.677597 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 01:10:21.728698 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 01:10:21.728698 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 01:10:21.805401 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 01:10:21.819929 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 01:10:21.833066 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 01:10:21.820566 unknown[955]: wrote ssh authorized keys file for user: core
Apr 28 01:10:21.870353 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 28 01:10:21.870353 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 28 01:10:21.920836 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:10:21.920836 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 01:10:22.724274 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 28 01:10:24.698412 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:10:24.715944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 01:10:24.715944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 28 01:10:25.468706 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 28 01:10:32.579786 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 01:10:32.649865 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 01:10:32.649865 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 01:10:32.673050 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:10:32.706329 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:10:32.714899 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:10:32.714899 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:10:32.714899 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:10:32.714899 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:10:32.768858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 28 01:10:33.436653 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 28 01:11:06.350587 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:11:06.362868 ignition[955]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 28 01:11:06.435776 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 28 01:11:06.478421 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 28 01:11:06.516282 ignition[955]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 28 01:11:06.516282 ignition[955]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 28 01:11:06.542158 ignition[955]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:11:06.542158 ignition[955]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:11:06.542158 ignition[955]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 28 01:11:06.542158 ignition[955]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 28 01:11:06.576073 ignition[955]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:11:06.627346 ignition[955]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:11:06.627346 ignition[955]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 28 01:11:06.627346 ignition[955]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:11:08.047085 ignition[955]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:11:08.171525 ignition[955]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:11:08.212244 ignition[955]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:11:08.212244 ignition[955]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 01:11:08.212244 ignition[955]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 01:11:08.231877 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:11:08.231877 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:11:08.231877 ignition[955]: INFO : files: files passed
Apr 28 01:11:08.231877 ignition[955]: INFO : Ignition finished successfully
Apr 28 01:11:08.234853 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 01:11:08.332429 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 01:11:08.371699 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 01:11:08.567454 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 01:11:08.567663 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 01:11:08.577903 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 01:11:08.595172 initrd-setup-root-after-ignition[989]: grep:
Apr 28 01:11:08.602805 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf
Apr 28 01:11:08.602805 initrd-setup-root-after-ignition[989]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:11:08.612018 initrd-setup-root-after-ignition[985]: : No such file or directory
Apr 28 01:11:08.612018 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:11:08.617658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 01:11:08.627869 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 01:11:08.660235 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 01:11:10.518564 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 01:11:10.518769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 01:11:10.539095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 01:11:10.579300 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 01:11:10.581170 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 01:11:10.663605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 01:11:10.990341 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 01:11:11.000017 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 01:11:11.308959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 01:11:11.368662 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 01:11:11.382185 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 01:11:11.423496 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 01:11:11.423942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 01:11:11.433492 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 01:11:11.439688 systemd[1]: Stopped target basic.target - Basic System. Apr 28 01:11:11.459842 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 01:11:11.470233 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 01:11:11.525362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 01:11:11.536784 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 01:11:11.541938 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 01:11:11.570962 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 28 01:11:11.602132 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Apr 28 01:11:11.612420 systemd[1]: Stopped target swap.target - Swaps. Apr 28 01:11:11.616860 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 01:11:11.617558 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 01:11:11.647651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 01:11:11.669672 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 01:11:11.697067 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 01:11:11.699137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 01:11:11.710633 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 01:11:11.710971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 28 01:11:11.735314 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 01:11:11.759545 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 01:11:11.774867 systemd[1]: Stopped target paths.target - Path Units. Apr 28 01:11:11.783624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 01:11:11.833024 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 01:11:11.842993 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 01:11:11.843265 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 01:11:11.868943 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 01:11:11.869378 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 01:11:11.906192 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 01:11:11.906472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 01:11:11.909874 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 28 01:11:11.910146 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 01:11:11.910948 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 01:11:11.911055 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 01:11:11.927718 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 01:11:11.933466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 28 01:11:11.956798 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 01:11:12.031535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 01:11:12.054833 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 28 01:11:12.066423 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 01:11:12.075001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 01:11:12.077190 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 01:11:12.526671 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 01:11:12.529985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 01:11:12.768820 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 28 01:11:12.801054 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 01:11:12.801312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 28 01:11:13.056874 ignition[1009]: INFO : Ignition 2.19.0 Apr 28 01:11:13.061929 ignition[1009]: INFO : Stage: umount Apr 28 01:11:13.065480 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 01:11:13.065480 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 01:11:13.081971 ignition[1009]: INFO : umount: umount passed Apr 28 01:11:13.089148 ignition[1009]: INFO : Ignition finished successfully Apr 28 01:11:13.157583 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 01:11:13.179985 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 01:11:13.221302 systemd[1]: Stopped target network.target - Network. Apr 28 01:11:13.223934 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 01:11:13.224142 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 01:11:13.257159 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 28 01:11:13.261146 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 01:11:13.268538 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 01:11:13.269710 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 01:11:13.283540 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 01:11:13.331138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 28 01:11:13.344044 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 01:11:13.345109 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 01:11:13.405878 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 01:11:13.420310 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 01:11:13.426897 systemd-networkd[780]: eth0: DHCPv6 lease lost Apr 28 01:11:13.455802 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 28 01:11:13.456849 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 28 01:11:13.480790 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 01:11:13.481032 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 01:11:13.517587 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 01:11:13.517692 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 01:11:13.542935 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 28 01:11:13.544901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 01:11:13.555158 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 01:11:13.565522 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 01:11:13.571906 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 01:11:13.578930 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 01:11:13.579166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 01:11:13.633863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 28 01:11:13.636324 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 01:11:13.641824 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 01:11:13.719420 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 28 01:11:13.722021 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 01:11:13.737083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 28 01:11:13.738081 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 28 01:11:13.745146 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 28 01:11:13.745235 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 01:11:13.764642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 01:11:13.766549 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 01:11:13.782455 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 01:11:13.800194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 01:11:13.819661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 01:11:13.825078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 01:11:13.879024 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 01:11:13.931236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 01:11:13.939848 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 01:11:13.956292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 01:11:13.960717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 01:11:14.018383 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 01:11:14.019637 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 01:11:14.118723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 01:11:14.145403 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 01:11:14.176881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 01:11:14.219779 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 01:11:14.709276 systemd[1]: Switching root. Apr 28 01:11:15.260050 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 28 01:11:15.260444 systemd-journald[194]: Journal stopped Apr 28 01:11:50.299120 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 01:11:50.300081 kernel: SELinux: policy capability open_perms=1 Apr 28 01:11:50.300133 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 01:11:50.300147 kernel: SELinux: policy capability always_check_network=0 Apr 28 01:11:50.300160 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 01:11:50.301545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 01:11:50.301567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 01:11:50.301590 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 01:11:50.301605 kernel: audit: type=1403 audit(1777338677.442:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 28 01:11:50.301624 systemd[1]: Successfully loaded SELinux policy in 185.904ms. Apr 28 01:11:50.301673 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 348.160ms. Apr 28 01:11:50.301689 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 01:11:50.301704 systemd[1]: Detected virtualization kvm. Apr 28 01:11:50.301718 systemd[1]: Detected architecture x86-64. Apr 28 01:11:50.301734 systemd[1]: Detected first boot. Apr 28 01:11:50.301749 systemd[1]: Initializing machine ID from VM UUID. Apr 28 01:11:50.301766 zram_generator::config[1071]: No configuration found. Apr 28 01:11:50.301782 systemd[1]: Populated /etc with preset unit settings. Apr 28 01:11:50.301797 systemd[1]: Queued start job for default target multi-user.target. Apr 28 01:11:50.301813 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 28 01:11:50.301830 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 01:11:50.301843 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 01:11:50.301860 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 01:11:50.301879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 01:11:50.301894 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 01:11:50.301908 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 01:11:50.301921 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 28 01:11:50.301934 systemd[1]: Created slice user.slice - User and Session Slice. Apr 28 01:11:50.301953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 01:11:50.301967 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 01:11:50.301980 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 01:11:50.301992 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 01:11:50.302005 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 01:11:50.302022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 01:11:50.302036 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 28 01:11:50.302049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 01:11:50.302062 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 01:11:50.302074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 28 01:11:50.302116 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 01:11:50.302128 systemd[1]: Reached target slices.target - Slice Units. Apr 28 01:11:50.302141 systemd[1]: Reached target swap.target - Swaps. Apr 28 01:11:50.302162 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 01:11:50.302177 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 01:11:50.302191 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 01:11:50.303153 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 01:11:50.303183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 01:11:50.303196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 01:11:50.303240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 01:11:50.303253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 01:11:50.303267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 01:11:50.303288 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 01:11:50.303321 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 01:11:50.303336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 01:11:50.303348 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 01:11:50.303362 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 01:11:50.303376 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 01:11:50.303389 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 01:11:50.303402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 28 01:11:50.303419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 01:11:50.303457 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 01:11:50.303469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 01:11:50.303480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 01:11:50.303494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 01:11:50.303509 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 01:11:50.303522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 01:11:50.303537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 01:11:50.303551 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 28 01:11:50.303569 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 28 01:11:50.303586 kernel: ACPI: bus type drm_connector registered Apr 28 01:11:50.303600 kernel: fuse: init (API version 7.39) Apr 28 01:11:50.303613 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 01:11:50.303626 kernel: loop: module loaded Apr 28 01:11:50.307125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 01:11:50.311773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 01:11:50.311849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 28 01:11:50.311956 systemd-journald[1171]: Collecting audit messages is disabled. 
Apr 28 01:11:50.312011 systemd-journald[1171]: Journal started Apr 28 01:11:50.312039 systemd-journald[1171]: Runtime Journal (/run/log/journal/01287489d2454c9881cc49abd3dd92c2) is 6.0M, max 48.4M, 42.3M free. Apr 28 01:11:50.325991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 01:11:50.343955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 01:11:50.381627 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 01:11:50.463480 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 01:11:50.484226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 01:11:50.540239 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 01:11:50.552128 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 01:11:50.562269 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 01:11:50.609877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 01:11:50.628371 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 01:11:50.649537 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 01:11:50.659151 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 01:11:50.662905 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 01:11:50.682112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 01:11:50.701415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 01:11:50.708794 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 01:11:50.716470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 01:11:50.723030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 28 01:11:50.726583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 01:11:50.773585 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 01:11:50.779253 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 01:11:50.826195 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 01:11:50.881112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 01:11:50.922592 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 01:11:50.941712 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 01:11:50.948308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 28 01:11:51.383962 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 01:11:51.766105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 01:11:51.868007 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 01:11:51.877464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 01:11:51.937803 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 01:11:51.976844 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 01:11:52.007611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 01:11:52.020490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 01:11:52.026583 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 28 01:11:52.068575 systemd-journald[1171]: Time spent on flushing to /var/log/journal/01287489d2454c9881cc49abd3dd92c2 is 513.057ms for 946 entries. Apr 28 01:11:52.068575 systemd-journald[1171]: System Journal (/var/log/journal/01287489d2454c9881cc49abd3dd92c2) is 8.0M, max 195.6M, 187.6M free. Apr 28 01:11:52.669432 systemd-journald[1171]: Received client request to flush runtime journal. Apr 28 01:11:52.052745 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 01:11:52.114050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 01:11:52.216135 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 01:11:52.248888 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 01:11:52.276491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 01:11:52.309009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 01:11:52.395139 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 01:11:52.769352 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 01:11:52.844358 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 01:11:52.880765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 01:11:52.920083 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 28 01:11:53.190536 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Apr 28 01:11:53.191569 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Apr 28 01:11:53.276762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 28 01:11:53.360438 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 01:11:55.029628 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 01:11:55.069473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 01:11:56.722665 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Apr 28 01:11:56.722711 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Apr 28 01:11:57.041231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 01:12:25.901961 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 01:12:25.950777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 01:12:31.420645 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Apr 28 01:12:36.751970 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 01:12:36.847155 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 01:12:36.919054 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 01:12:37.120672 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 28 01:12:37.456445 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 01:12:37.654057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1253) Apr 28 01:12:38.215828 systemd-networkd[1242]: lo: Link UP Apr 28 01:12:38.216466 systemd-networkd[1242]: lo: Gained carrier Apr 28 01:12:38.225402 systemd-networkd[1242]: Enumeration completed Apr 28 01:12:38.239479 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 01:12:38.239787 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 28 01:12:38.244051 systemd-networkd[1242]: eth0: Link UP Apr 28 01:12:38.244060 systemd-networkd[1242]: eth0: Gained carrier Apr 28 01:12:38.244081 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 01:12:38.315668 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 01:12:38.337371 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 01:12:38.347058 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 28 01:12:38.412543 kernel: ACPI: button: Power Button [PWRF] Apr 28 01:12:38.413182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 01:12:38.515516 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 01:12:38.573624 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 28 01:12:38.657159 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 01:12:38.660891 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 28 01:12:38.661250 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 01:12:39.163527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 01:12:39.315371 systemd-networkd[1242]: eth0: Gained IPv6LL Apr 28 01:12:39.529463 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 01:12:40.242825 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 01:12:41.913892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 01:12:41.958401 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 01:12:42.935814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 28 01:12:45.468111 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 01:13:00.764872 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 01:13:01.013379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 01:13:01.603656 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 01:13:06.223018 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 01:13:13.322110 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 01:13:13.467096 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 01:13:13.488685 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 01:13:13.515260 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 01:13:13.526001 systemd[1]: Reached target machines.target - Containers. Apr 28 01:13:13.771785 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 01:13:15.020860 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 28 01:13:15.114943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 01:13:15.144101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 01:13:15.328139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 01:13:15.389867 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 01:13:15.480477 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 28 01:13:15.537798 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 01:13:15.664696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 01:13:15.805977 kernel: loop0: detected capacity change from 0 to 140768
Apr 28 01:13:15.816369 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 01:13:15.825657 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 01:13:16.663763 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 01:13:17.277834 kernel: loop1: detected capacity change from 0 to 228704
Apr 28 01:13:18.820649 kernel: loop2: detected capacity change from 0 to 142488
Apr 28 01:13:20.043863 kernel: loop3: detected capacity change from 0 to 140768
Apr 28 01:13:20.614432 kernel: loop4: detected capacity change from 0 to 228704
Apr 28 01:13:20.781367 kernel: loop5: detected capacity change from 0 to 142488
Apr 28 01:13:21.551787 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 01:13:21.559533 (sd-merge)[1312]: Merged extensions into '/usr'.
Apr 28 01:13:24.632126 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 01:13:24.651128 systemd[1]: Reloading...
Apr 28 01:13:26.473430 zram_generator::config[1339]: No configuration found.
Apr 28 01:13:29.459347 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 01:13:38.515951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:14:10.956662 systemd[1]: Reloading finished in 46293 ms.
Apr 28 01:14:31.328958 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 01:14:31.504757 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 01:14:34.037431 systemd[1]: Starting ensure-sysext.service...
Apr 28 01:14:34.274952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:14:37.284961 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Apr 28 01:14:37.307037 systemd[1]: Reloading...
Apr 28 01:14:39.851638 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 01:14:40.059897 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 01:14:40.766474 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 01:14:40.935871 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 28 01:14:40.942585 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 28 01:14:42.523066 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:14:42.530899 systemd-tmpfiles[1384]: Skipping /boot
Apr 28 01:14:48.070716 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:14:48.072388 systemd-tmpfiles[1384]: Skipping /boot
Apr 28 01:14:49.848093 zram_generator::config[1415]: No configuration found.
Apr 28 01:15:20.218373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:15:32.224909 systemd[1]: Reloading finished in 54849 ms.
Apr 28 01:15:37.435905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:15:37.884120 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 01:15:37.953113 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 01:15:38.132093 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 01:15:38.264584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:15:38.364308 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 01:15:38.788190 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 01:15:38.966989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:15:39.025919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:15:39.106691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:15:39.153877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:15:39.163193 augenrules[1484]: No rules
Apr 28 01:15:39.254773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:15:39.260195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:15:39.319573 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 01:15:39.324517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:15:39.427293 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 01:15:39.493687 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 01:15:39.511352 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 01:15:39.567169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:15:39.582726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:15:39.645828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:15:39.646114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:15:39.855162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:15:39.909143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:15:39.943282 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 01:15:39.983069 systemd-resolved[1463]: Positive Trust Anchors:
Apr 28 01:15:39.984272 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:15:39.984320 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:15:40.210975 systemd-resolved[1463]: Defaulting to hostname 'linux'.
Apr 28 01:15:40.355726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:15:40.363922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:15:40.486594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:15:40.750803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 01:15:40.774195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:15:40.880747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:15:40.886848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:15:40.886922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 01:15:40.886948 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:15:40.895792 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:15:40.923725 systemd[1]: Finished ensure-sysext.service.
Apr 28 01:15:41.193889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:15:41.200146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:15:41.327846 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 01:15:41.337015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 01:15:42.936882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:15:43.120379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:15:43.386710 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:15:43.544760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:15:45.475649 systemd[1]: Reached target network.target - Network.
Apr 28 01:15:45.548195 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 01:15:45.642701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:15:45.668825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:15:45.709304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:15:46.533676 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 01:16:03.759561 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 01:16:03.980061 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 01:16:03.980171 systemd-timesyncd[1522]: Initial clock synchronization to Tue 2026-04-28 01:16:03.913126 UTC.
Apr 28 01:16:03.980646 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:16:04.169982 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 01:16:04.371850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 01:16:04.721666 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 01:16:04.823293 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 01:16:04.899992 systemd[1]: Reached target paths.target - Path Units.
Apr 28 01:16:05.015774 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 01:16:05.063618 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 01:16:05.313106 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 01:16:05.397061 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 01:16:06.345961 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 01:16:08.543836 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 01:16:08.983034 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 01:16:09.218773 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 01:16:09.298684 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:16:09.591026 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:16:09.725065 systemd[1]: System is tainted: cgroupsv1
Apr 28 01:16:09.775931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:16:09.805492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:16:11.287181 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 01:16:12.343449 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 28 01:16:12.500781 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 01:16:12.803834 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 01:16:12.970482 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 01:16:12.979429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 01:16:13.009388 jq[1531]: false
Apr 28 01:16:13.014834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:16:13.121061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 01:16:13.489876 dbus-daemon[1528]: [system] SELinux support is enabled
Apr 28 01:16:13.514710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found loop3
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found loop4
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found loop5
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found sr0
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda1
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda2
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda3
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found usr
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda4
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda6
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda7
Apr 28 01:16:13.540170 extend-filesystems[1532]: Found vda9
Apr 28 01:16:13.540170 extend-filesystems[1532]: Checking size of /dev/vda9
Apr 28 01:16:13.649331 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 01:16:13.810839 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 01:16:13.879619 extend-filesystems[1532]: Resized partition /dev/vda9
Apr 28 01:16:13.899913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 01:16:13.920728 extend-filesystems[1551]: resize2fs 1.47.1 (20-May-2024)
Apr 28 01:16:13.999627 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 01:16:14.216265 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 01:16:14.251008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 01:16:14.557436 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 01:16:14.599587 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 01:16:14.722023 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 01:16:14.722023 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 01:16:14.722023 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 28 01:16:14.769161 extend-filesystems[1532]: Resized filesystem in /dev/vda9
Apr 28 01:16:14.749832 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 01:16:14.930938 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 01:16:15.044612 jq[1568]: true
Apr 28 01:16:15.099569 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 01:16:15.110728 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 01:16:15.131719 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 01:16:15.200603 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 01:16:15.297699 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 01:16:15.298941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 01:16:15.325523 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 01:16:15.528907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 01:16:15.609757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1571)
Apr 28 01:16:15.614119 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 01:16:15.747874 update_engine[1565]: I20260428 01:16:15.635792 1565 main.cc:92] Flatcar Update Engine starting
Apr 28 01:16:15.862799 update_engine[1565]: I20260428 01:16:15.841414 1565 update_check_scheduler.cc:74] Next update check in 2m32s
Apr 28 01:16:15.991972 jq[1585]: true
Apr 28 01:16:16.158586 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 01:16:16.159387 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 28 01:16:16.189027 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 28 01:16:17.461069 tar[1583]: linux-amd64/LICENSE
Apr 28 01:16:17.461069 tar[1583]: linux-amd64/helm
Apr 28 01:16:17.899292 bash[1619]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 01:16:17.929653 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 01:16:17.982726 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 01:16:18.375043 systemd-logind[1560]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 01:16:18.382890 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 01:16:18.537880 systemd-logind[1560]: New seat seat0.
Apr 28 01:16:18.595267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 01:16:18.612569 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 01:16:18.620468 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 01:16:18.688649 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 01:16:18.715772 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 01:16:18.715851 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 01:16:18.884082 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 01:16:19.008540 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 01:16:19.095297 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 01:16:19.290057 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 28 01:16:21.389041 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 01:16:22.609068 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 01:16:23.254801 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 01:16:23.700765 containerd[1586]: time="2026-04-28T01:16:23.585387745Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 01:16:23.892480 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 01:16:24.302995 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:46542.service - OpenSSH per-connection server daemon (10.0.0.1:46542).
Apr 28 01:16:24.884491 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 01:16:24.892394 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 01:16:25.382144 containerd[1586]: time="2026-04-28T01:16:25.372786748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.385493 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 01:16:25.581703 containerd[1586]: time="2026-04-28T01:16:25.580108759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:16:25.610927 containerd[1586]: time="2026-04-28T01:16:25.595913764Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 01:16:25.610927 containerd[1586]: time="2026-04-28T01:16:25.609431501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 01:16:25.763787 containerd[1586]: time="2026-04-28T01:16:25.739004479Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 01:16:25.763787 containerd[1586]: time="2026-04-28T01:16:25.762632037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.823114 containerd[1586]: time="2026-04-28T01:16:25.822798724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:16:25.823114 containerd[1586]: time="2026-04-28T01:16:25.822901197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.880158 containerd[1586]: time="2026-04-28T01:16:25.858684051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:16:25.880158 containerd[1586]: time="2026-04-28T01:16:25.859446346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.880158 containerd[1586]: time="2026-04-28T01:16:25.876241492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:16:25.880158 containerd[1586]: time="2026-04-28T01:16:25.880051000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.890859 containerd[1586]: time="2026-04-28T01:16:25.890764441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.904243 containerd[1586]: time="2026-04-28T01:16:25.902518260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:16:25.990055 containerd[1586]: time="2026-04-28T01:16:25.959735886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:16:25.990055 containerd[1586]: time="2026-04-28T01:16:25.962436475Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 01:16:25.990055 containerd[1586]: time="2026-04-28T01:16:25.963516673Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 01:16:25.990055 containerd[1586]: time="2026-04-28T01:16:25.970668316Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 01:16:26.035623 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 01:16:26.398297 containerd[1586]: time="2026-04-28T01:16:26.362723424Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 01:16:26.460579 containerd[1586]: time="2026-04-28T01:16:26.400433037Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 01:16:26.460579 containerd[1586]: time="2026-04-28T01:16:26.400583812Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 01:16:26.460579 containerd[1586]: time="2026-04-28T01:16:26.400600996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 01:16:26.460579 containerd[1586]: time="2026-04-28T01:16:26.400661794Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 01:16:26.757582 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 01:16:26.909846 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 46542 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:16:27.002096 containerd[1586]: time="2026-04-28T01:16:26.943912273Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 01:16:27.002096 containerd[1586]: time="2026-04-28T01:16:26.957090720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 01:16:26.975909 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.018449476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.018873582Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.018903700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.028533276Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.029600308Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.029819140Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.030023007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.030045617Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.030065475Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.030085709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.030104924Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.051550281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.099937 containerd[1586]: time="2026-04-28T01:16:27.061075691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.031036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 28 01:16:27.116931 containerd[1586]: time="2026-04-28T01:16:27.110921919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.147034 containerd[1586]: time="2026-04-28T01:16:27.143569565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.147034 containerd[1586]: time="2026-04-28T01:16:27.143643499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.147034 containerd[1586]: time="2026-04-28T01:16:27.143681823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.143745 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.157950167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158614977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158710606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158776317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158792531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158857805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158898649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.158973743Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159024359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159042423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159055537Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159353595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159442802Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 01:16:27.285186 containerd[1586]: time="2026-04-28T01:16:27.159457709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 01:16:27.285754 containerd[1586]: time="2026-04-28T01:16:27.159506923Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 01:16:27.285754 containerd[1586]: time="2026-04-28T01:16:27.159521089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.285754 containerd[1586]: time="2026-04-28T01:16:27.159549711Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 01:16:27.285754 containerd[1586]: time="2026-04-28T01:16:27.159615692Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 01:16:27.285754 containerd[1586]: time="2026-04-28T01:16:27.159730148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 01:16:27.313047 containerd[1586]: time="2026-04-28T01:16:27.298873848Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 28 01:16:27.353139 containerd[1586]: time="2026-04-28T01:16:27.348853795Z" level=info msg="Connect containerd service"
Apr 28 01:16:27.355054 containerd[1586]: time="2026-04-28T01:16:27.354925433Z" level=info msg="using legacy CRI server"
Apr 28 01:16:27.358840 containerd[1586]: time="2026-04-28T01:16:27.356257655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 01:16:27.369167 containerd[1586]: time="2026-04-28T01:16:27.366319249Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 28 01:16:27.468327 containerd[1586]: time="2026-04-28T01:16:27.466680964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 01:16:27.506370 containerd[1586]: time="2026-04-28T01:16:27.483071227Z" level=info msg="Start subscribing containerd event"
Apr 28 01:16:27.506370 containerd[1586]: time="2026-04-28T01:16:27.492103769Z" level=info msg="Start recovering state"
Apr 28 01:16:27.513736 containerd[1586]: time="2026-04-28T01:16:27.509505758Z" level=info msg="Start event monitor"
Apr 28 01:16:27.519840 containerd[1586]: time="2026-04-28T01:16:27.514927777Z"
level=info msg="Start snapshots syncer" Apr 28 01:16:27.578944 containerd[1586]: time="2026-04-28T01:16:27.576994582Z" level=info msg="Start cni network conf syncer for default" Apr 28 01:16:27.587972 containerd[1586]: time="2026-04-28T01:16:27.586154867Z" level=info msg="Start streaming server" Apr 28 01:16:27.611993 containerd[1586]: time="2026-04-28T01:16:27.606501005Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 01:16:27.616295 containerd[1586]: time="2026-04-28T01:16:27.613110217Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 01:16:27.668054 containerd[1586]: time="2026-04-28T01:16:27.620757409Z" level=info msg="containerd successfully booted in 4.166584s" Apr 28 01:16:27.621806 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 01:16:27.668968 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 01:16:27.814290 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 01:16:28.603586 systemd-logind[1560]: New session 1 of user core. Apr 28 01:16:29.312016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 01:16:30.307661 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 01:16:30.800842 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 01:16:34.409699 tar[1583]: linux-amd64/README.md Apr 28 01:16:36.008961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 01:16:38.047481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:16:38.172899 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 28 01:16:38.700625 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:16:49.755869 systemd[1667]: Queued start job for default target default.target. Apr 28 01:16:49.825088 systemd[1667]: Created slice app.slice - User Application Slice. Apr 28 01:16:49.878765 systemd[1667]: Reached target paths.target - Paths. Apr 28 01:16:49.880472 systemd[1667]: Reached target timers.target - Timers. Apr 28 01:16:50.415833 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 01:16:53.067974 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 01:16:53.079765 systemd[1667]: Reached target sockets.target - Sockets. Apr 28 01:16:53.079795 systemd[1667]: Reached target basic.target - Basic System. Apr 28 01:16:53.080008 systemd[1667]: Reached target default.target - Main User Target. Apr 28 01:16:53.080044 systemd[1667]: Startup finished in 20.999s. Apr 28 01:16:53.167908 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 01:16:53.974837 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 01:16:53.998765 systemd[1]: Startup finished in 1min 46.955s (kernel) + 5min 36.728s (userspace) = 7min 23.683s. Apr 28 01:16:55.899938 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:46262.service - OpenSSH per-connection server daemon (10.0.0.1:46262). Apr 28 01:16:59.193970 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 46262 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:16:59.587334 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:17:01.362832 update_engine[1565]: I20260428 01:17:01.317022 1565 update_attempter.cc:509] Updating boot flags... Apr 28 01:17:02.039274 systemd-logind[1560]: New session 2 of user core. Apr 28 01:17:02.110989 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 28 01:17:03.211922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1716) Apr 28 01:17:03.419865 sshd[1703]: pam_unix(sshd:session): session closed for user core Apr 28 01:17:04.315192 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:46262.service: Deactivated successfully. Apr 28 01:17:04.560561 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 01:17:04.769783 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Apr 28 01:17:07.398869 systemd-logind[1560]: Removed session 2. Apr 28 01:17:09.698362 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:60940.service - OpenSSH per-connection server daemon (10.0.0.1:60940). Apr 28 01:17:14.387709 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 60940 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:17:15.079628 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:17:18.377978 systemd-logind[1560]: New session 3 of user core. Apr 28 01:17:18.650955 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 01:17:20.725716 sshd[1725]: pam_unix(sshd:session): session closed for user core Apr 28 01:17:21.835926 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:48912.service - OpenSSH per-connection server daemon (10.0.0.1:48912). Apr 28 01:17:21.942159 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:60940.service: Deactivated successfully. Apr 28 01:17:22.176720 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 01:17:22.375661 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Apr 28 01:17:22.593870 systemd-logind[1560]: Removed session 3. 
Apr 28 01:17:25.378169 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 48912 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:17:25.816049 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:17:27.049068 kubelet[1690]: E0428 01:17:27.004990 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:17:27.097792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:17:27.144247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:17:31.963766 systemd-logind[1560]: New session 4 of user core. Apr 28 01:17:34.364537 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 01:17:37.866653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 01:17:38.516304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:17:38.903084 sshd[1732]: pam_unix(sshd:session): session closed for user core Apr 28 01:17:39.242433 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:48912.service: Deactivated successfully. Apr 28 01:17:39.512680 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 01:17:42.505090 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Apr 28 01:17:43.087033 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:54190.service - OpenSSH per-connection server daemon (10.0.0.1:54190). Apr 28 01:17:43.319989 systemd-logind[1560]: Removed session 4. 
Apr 28 01:17:52.306171 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 54190 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:17:53.407995 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:17:56.106119 systemd-logind[1560]: New session 5 of user core. Apr 28 01:18:00.542438 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 01:18:00.560454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:18:00.697075 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:18:05.080899 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 01:18:05.212954 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:18:06.515076 sudo[1770]: pam_unix(sudo:session): session closed for user root Apr 28 01:18:06.750332 sshd[1749]: pam_unix(sshd:session): session closed for user core Apr 28 01:18:07.178370 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:44374.service - OpenSSH per-connection server daemon (10.0.0.1:44374). Apr 28 01:18:07.185886 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:54190.service: Deactivated successfully. Apr 28 01:18:07.434759 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 01:18:07.659884 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Apr 28 01:18:07.810758 systemd-logind[1560]: Removed session 5. Apr 28 01:18:09.267176 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 44374 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:18:09.571763 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:18:12.102717 systemd-logind[1560]: New session 6 of user core. Apr 28 01:18:12.664612 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 28 01:18:17.354902 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 01:18:17.390821 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:18:18.760026 sudo[1782]: pam_unix(sudo:session): session closed for user root Apr 28 01:18:23.512604 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 01:18:23.532077 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:18:28.576122 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 28 01:18:29.001832 auditctl[1785]: No rules Apr 28 01:18:29.160950 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 01:18:29.204944 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 28 01:18:30.066404 kubelet[1762]: E0428 01:18:30.065920 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:18:30.135051 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 01:18:30.219130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:18:30.259877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:18:41.611294 augenrules[1807]: No rules Apr 28 01:18:41.624874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 01:18:42.095884 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 28 01:18:42.189683 sudo[1781]: pam_unix(sudo:session): session closed for user root Apr 28 01:18:42.281921 sshd[1773]: pam_unix(sshd:session): session closed for user core Apr 28 01:18:48.354565 update_engine[1565]: I20260428 01:18:48.345695 1565 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 01:18:48.386895 update_engine[1565]: I20260428 01:18:48.357159 1565 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 01:18:48.415154 update_engine[1565]: I20260428 01:18:48.412779 1565 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 01:18:48.560362 update_engine[1565]: I20260428 01:18:48.556716 1565 omaha_request_params.cc:62] Current group set to lts Apr 28 01:18:48.570088 update_engine[1565]: I20260428 01:18:48.569980 1565 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 01:18:48.570088 update_engine[1565]: I20260428 01:18:48.570065 1565 update_attempter.cc:643] Scheduling an action processor start. 
Apr 28 01:18:48.570088 update_engine[1565]: I20260428 01:18:48.570088 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 01:18:48.601835 update_engine[1565]: I20260428 01:18:48.577112 1565 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 01:18:48.601835 update_engine[1565]: I20260428 01:18:48.594941 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 01:18:48.612364 update_engine[1565]: I20260428 01:18:48.601709 1565 omaha_request_action.cc:272] Request: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: Apr 28 01:18:48.612364 update_engine[1565]: I20260428 01:18:48.602048 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:18:48.612724 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:44374.service: Deactivated successfully. Apr 28 01:18:48.662121 locksmithd[1627]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 01:18:48.703380 update_engine[1565]: I20260428 01:18:48.696107 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:18:48.813256 update_engine[1565]: I20260428 01:18:48.812796 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 01:18:48.910778 update_engine[1565]: E20260428 01:18:48.882897 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:18:48.910778 update_engine[1565]: I20260428 01:18:48.901666 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 01:18:49.160938 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 28 01:18:52.989114 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Apr 28 01:18:55.086615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:18:56.201671 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:49718.service - OpenSSH per-connection server daemon (10.0.0.1:49718). Apr 28 01:18:56.683618 systemd-logind[1560]: Removed session 6. Apr 28 01:18:58.687333 sshd[1818]: Accepted publickey for core from 10.0.0.1 port 49718 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:18:58.845079 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:18:59.393071 update_engine[1565]: I20260428 01:18:59.318249 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:18:59.441678 update_engine[1565]: I20260428 01:18:59.411594 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:18:59.441678 update_engine[1565]: I20260428 01:18:59.422804 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 01:18:59.441678 update_engine[1565]: E20260428 01:18:59.441119 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:18:59.441678 update_engine[1565]: I20260428 01:18:59.441461 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 01:19:00.672176 systemd-logind[1560]: New session 7 of user core. Apr 28 01:19:01.778136 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 01:19:04.704062 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 01:19:04.708648 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:19:07.261512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 01:19:07.497376 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:19:09.496832 update_engine[1565]: I20260428 01:19:09.322033 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:19:09.678085 update_engine[1565]: I20260428 01:19:09.670907 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:19:09.822714 update_engine[1565]: I20260428 01:19:09.791934 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 01:19:09.822714 update_engine[1565]: E20260428 01:19:09.820152 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:19:09.822714 update_engine[1565]: I20260428 01:19:09.821662 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 01:19:20.345394 update_engine[1565]: I20260428 01:19:20.330859 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:19:20.395559 update_engine[1565]: I20260428 01:19:20.359616 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:19:20.395559 update_engine[1565]: I20260428 01:19:20.369712 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 01:19:20.395559 update_engine[1565]: E20260428 01:19:20.386476 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:19:20.468846 update_engine[1565]: I20260428 01:19:20.415995 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 01:19:20.468846 update_engine[1565]: I20260428 01:19:20.464493 1565 omaha_request_action.cc:617] Omaha request response: Apr 28 01:19:20.515907 update_engine[1565]: E20260428 01:19:20.499944 1565 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 28 01:19:20.527285 update_engine[1565]: I20260428 01:19:20.516451 1565 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 01:19:20.527285 update_engine[1565]: I20260428 01:19:20.526661 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.527349 1565 update_attempter.cc:306] Processing Done. Apr 28 01:19:20.621111 update_engine[1565]: E20260428 01:19:20.527467 1565 update_attempter.cc:619] Update failed. Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.527477 1565 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.527484 1565 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.527566 1565 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.528072 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.589527 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.601553 1565 omaha_request_action.cc:272] Request: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: Apr 28 01:19:20.621111 update_engine[1565]: I20260428 01:19:20.601810 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.624891 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.642041 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 01:19:20.864988 update_engine[1565]: E20260428 01:19:20.692426 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.746726 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.755156 1565 omaha_request_action.cc:617] Omaha request response: Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.761600 1565 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.796480 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.814451 1565 update_attempter.cc:306] Processing Done. Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.817246 1565 update_attempter.cc:310] Error event sent. Apr 28 01:19:20.864988 update_engine[1565]: I20260428 01:19:20.817472 1565 update_check_scheduler.cc:74] Next update check in 49m48s Apr 28 01:19:20.982179 locksmithd[1627]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 01:19:20.982179 locksmithd[1627]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 01:19:23.780902 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 28 01:19:24.608379 (dockerd)[1860]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 01:19:52.005496 dockerd[1860]: time="2026-04-28T01:19:51.984116404Z" level=info msg="Starting up" Apr 28 01:20:01.189267 dockerd[1860]: time="2026-04-28T01:20:01.176291592Z" level=info msg="Loading containers: start." Apr 28 01:20:04.971629 kubelet[1843]: E0428 01:20:04.925351 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:20:05.123485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:20:05.162746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:20:15.499880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 28 01:20:16.424873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:20:21.383719 kernel: Initializing XFRM netlink socket Apr 28 01:20:27.582768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:20:27.820635 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:20:28.321734 systemd-networkd[1242]: docker0: Link UP Apr 28 01:20:30.590866 dockerd[1860]: time="2026-04-28T01:20:30.585617581Z" level=info msg="Loading containers: done." 
Apr 28 01:20:33.471662 dockerd[1860]: time="2026-04-28T01:20:33.464928599Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 01:20:33.615911 dockerd[1860]: time="2026-04-28T01:20:33.593815422Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 01:20:33.650948 dockerd[1860]: time="2026-04-28T01:20:33.635717945Z" level=info msg="Daemon has completed initialization" Apr 28 01:20:42.186752 dockerd[1860]: time="2026-04-28T01:20:42.154906838Z" level=info msg="API listen on /run/docker.sock" Apr 28 01:20:42.268520 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 01:21:31.516973 kubelet[1979]: E0428 01:21:31.474545 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:21:31.697665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:21:31.719637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:21:32.577879 containerd[1586]: time="2026-04-28T01:21:32.567852197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\"" Apr 28 01:21:44.162272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 28 01:21:45.001545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:21:47.982783 containerd[1586]: time="2026-04-28T01:21:47.980978185Z" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.33.8\": net/http: TLS handshake timeout" host=registry.k8s.io Apr 28 01:21:48.092865 containerd[1586]: time="2026-04-28T01:21:48.089710295Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.8: active requests=0, bytes read=0" Apr 28 01:21:48.092865 containerd[1586]: time="2026-04-28T01:21:48.092968242Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.33.8\": failed to resolve reference \"registry.k8s.io/kube-apiserver:v1.33.8\": failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.33.8\": net/http: TLS handshake timeout" Apr 28 01:21:48.375835 containerd[1586]: time="2026-04-28T01:21:48.278159300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\"" Apr 28 01:22:11.495760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:22:12.033902 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:22:12.459579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555226620.mount: Deactivated successfully. 
Apr 28 01:23:04.489940 kubelet[2058]: E0428 01:23:04.487401 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:23:04.499241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:23:04.506184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:23:14.797987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 28 01:23:14.969954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:23:20.528680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:23:20.583955 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:23:22.823175 kubelet[2138]: E0428 01:23:22.822096 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:23:22.852735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:23:22.854740 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:23:30.845164 containerd[1586]: time="2026-04-28T01:23:30.839710103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:23:30.871614 containerd[1586]: time="2026-04-28T01:23:30.856195846Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.8: active requests=0, bytes read=30113997" Apr 28 01:23:31.334160 containerd[1586]: time="2026-04-28T01:23:31.317806332Z" level=info msg="ImageCreate event name:\"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:23:33.300655 containerd[1586]: time="2026-04-28T01:23:33.299415210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:23:33.302685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 28 01:23:33.461456 containerd[1586]: time="2026-04-28T01:23:33.460162814Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.8\" with image id \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\", size \"30111158\" in 1m45.181042848s" Apr 28 01:23:33.461456 containerd[1586]: time="2026-04-28T01:23:33.460676284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\" returns image reference \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\"" Apr 28 01:23:34.021422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:23:34.199704 containerd[1586]: time="2026-04-28T01:23:34.199192409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\"" Apr 28 01:23:45.016769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:23:45.085677 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:23:50.199352 kubelet[2160]: E0428 01:23:50.198877 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:23:50.218278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:23:50.219043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:23:56.430613 containerd[1586]: time="2026-04-28T01:23:56.426078323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:23:56.452487 containerd[1586]: time="2026-04-28T01:23:56.452183358Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.8: active requests=0, bytes read=26021560" Apr 28 01:23:56.458447 containerd[1586]: time="2026-04-28T01:23:56.458111914Z" level=info msg="ImageCreate event name:\"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:23:57.795174 containerd[1586]: time="2026-04-28T01:23:57.788118429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 
01:23:58.625931 containerd[1586]: time="2026-04-28T01:23:58.620130310Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.8\" with image id \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\", size \"27678578\" in 24.393118125s" Apr 28 01:23:58.671644 containerd[1586]: time="2026-04-28T01:23:58.643677808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\" returns image reference \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\"" Apr 28 01:23:58.796809 containerd[1586]: time="2026-04-28T01:23:58.789576265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\"" Apr 28 01:24:01.010123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 28 01:24:01.124065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:24:06.574464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:24:06.901731 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:24:12.063066 kubelet[2191]: E0428 01:24:12.034040 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:24:12.201676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:24:12.202700 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:24:18.563753 containerd[1586]: time="2026-04-28T01:24:18.563103876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:18.608692 containerd[1586]: time="2026-04-28T01:24:18.566646642Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.8: active requests=0, bytes read=20160949" Apr 28 01:24:18.706773 containerd[1586]: time="2026-04-28T01:24:18.706059087Z" level=info msg="ImageCreate event name:\"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:19.055634 containerd[1586]: time="2026-04-28T01:24:19.054610131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:19.131810 containerd[1586]: time="2026-04-28T01:24:19.130764880Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.8\" with image id \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\", size \"21817985\" in 20.335691796s" Apr 28 01:24:19.131810 containerd[1586]: time="2026-04-28T01:24:19.131134336Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\" returns image reference \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\"" Apr 28 01:24:19.202599 containerd[1586]: time="2026-04-28T01:24:19.201680698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\"" Apr 28 01:24:22.307169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 28 01:24:22.643298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:24:24.324856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:24:24.333139 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:24:24.984787 kubelet[2217]: E0428 01:24:24.984521 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:24:24.988118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:24:24.988339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:24:25.258178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663209991.mount: Deactivated successfully. Apr 28 01:24:25.967487 containerd[1586]: time="2026-04-28T01:24:25.966987620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:25.978282 containerd[1586]: time="2026-04-28T01:24:25.968136151Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.8: active requests=0, bytes read=31828042" Apr 28 01:24:25.978282 containerd[1586]: time="2026-04-28T01:24:25.976051029Z" level=info msg="ImageCreate event name:\"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:26.001334 containerd[1586]: time="2026-04-28T01:24:26.000900026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:26.004016 containerd[1586]: time="2026-04-28T01:24:26.002947404Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.8\" with image id \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\", repo tag \"registry.k8s.io/kube-proxy:v1.33.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\", size \"31827167\" in 6.798473231s" Apr 28 01:24:26.004016 containerd[1586]: time="2026-04-28T01:24:26.003693593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\" returns image reference \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\"" Apr 28 01:24:26.033563 containerd[1586]: time="2026-04-28T01:24:26.032781066Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 28 01:24:27.185935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256701736.mount: Deactivated successfully. Apr 28 01:24:28.920109 containerd[1586]: time="2026-04-28T01:24:28.919042534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:28.920109 containerd[1586]: time="2026-04-28T01:24:28.919151015Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 01:24:28.925821 containerd[1586]: time="2026-04-28T01:24:28.924851776Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:28.952394 containerd[1586]: time="2026-04-28T01:24:28.951327474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:28.955620 containerd[1586]: time="2026-04-28T01:24:28.955542191Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.922514838s" Apr 28 01:24:28.955673 containerd[1586]: time="2026-04-28T01:24:28.955628508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 01:24:28.960731 containerd[1586]: time="2026-04-28T01:24:28.960499486Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 28 01:24:30.120853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030783690.mount: Deactivated successfully. Apr 28 01:24:30.175021 containerd[1586]: time="2026-04-28T01:24:30.173664545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:30.184066 containerd[1586]: time="2026-04-28T01:24:30.173810030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 01:24:30.197507 containerd[1586]: time="2026-04-28T01:24:30.197122744Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:30.206318 containerd[1586]: time="2026-04-28T01:24:30.205973604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:30.338827 containerd[1586]: time="2026-04-28T01:24:30.338278812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.377682228s" Apr 28 01:24:30.338827 containerd[1586]: time="2026-04-28T01:24:30.338476164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 01:24:30.345557 containerd[1586]: time="2026-04-28T01:24:30.345177286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 01:24:31.331993 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 28 01:24:31.368795 systemd-tmpfiles[2288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 01:24:31.372815 systemd-tmpfiles[2288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 01:24:31.373806 systemd-tmpfiles[2288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 01:24:31.374092 systemd-tmpfiles[2288]: ACLs are not supported, ignoring. Apr 28 01:24:31.375998 systemd-tmpfiles[2288]: ACLs are not supported, ignoring. Apr 28 01:24:31.384657 systemd-tmpfiles[2288]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 01:24:31.384706 systemd-tmpfiles[2288]: Skipping /boot Apr 28 01:24:31.418492 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 28 01:24:31.487387 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 01:24:32.115632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869970110.mount: Deactivated successfully. Apr 28 01:24:35.038119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 28 01:24:35.043435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:24:35.811001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:24:35.815305 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:24:36.189099 kubelet[2316]: E0428 01:24:36.185508 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:24:36.194016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:24:36.194370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:24:41.774371 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1818627427 wd_nsec: 1818626842 Apr 28 01:24:43.059286 containerd[1586]: time="2026-04-28T01:24:43.058775763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:43.081524 containerd[1586]: time="2026-04-28T01:24:43.059696625Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 28 01:24:43.098888 containerd[1586]: time="2026-04-28T01:24:43.098288693Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:43.125300 containerd[1586]: time="2026-04-28T01:24:43.124674527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:43.134836 containerd[1586]: time="2026-04-28T01:24:43.132514602Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 12.787139446s" Apr 28 01:24:43.134836 containerd[1586]: time="2026-04-28T01:24:43.134480504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 01:24:44.842158 containerd[1586]: time="2026-04-28T01:24:44.841938650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 28 01:24:46.355615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 28 01:24:46.525611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:24:47.826404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:24:47.837340 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:24:48.013779 kubelet[2432]: E0428 01:24:48.005882 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:24:48.022096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:24:48.022358 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:24:49.340911 containerd[1586]: time="2026-04-28T01:24:49.340330213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:49.340911 containerd[1586]: time="2026-04-28T01:24:49.340755512Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=29492654" Apr 28 01:24:49.342977 containerd[1586]: time="2026-04-28T01:24:49.342810040Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:49.345466 containerd[1586]: time="2026-04-28T01:24:49.345422127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:49.346401 containerd[1586]: time="2026-04-28T01:24:49.346353481Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 4.504320848s" Apr 28 01:24:49.346401 containerd[1586]: time="2026-04-28T01:24:49.346392896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 28 01:24:49.362740 containerd[1586]: time="2026-04-28T01:24:49.361951019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 28 01:24:54.182382 containerd[1586]: time="2026-04-28T01:24:54.181939791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:54.183753 containerd[1586]: time="2026-04-28T01:24:54.182783966Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 28 01:24:54.184386 containerd[1586]: time="2026-04-28T01:24:54.184352068Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:54.199796 containerd[1586]: time="2026-04-28T01:24:54.199530785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:54.200650 containerd[1586]: time="2026-04-28T01:24:54.200428071Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 4.837087445s" Apr 28 01:24:54.200650 containerd[1586]: time="2026-04-28T01:24:54.200466145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 28 01:24:54.208878 containerd[1586]: time="2026-04-28T01:24:54.208844591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 28 01:24:56.908528 containerd[1586]: time="2026-04-28T01:24:56.907093537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:56.922303 containerd[1586]: 
time="2026-04-28T01:24:56.913254875Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 28 01:24:56.924999 containerd[1586]: time="2026-04-28T01:24:56.924921381Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:56.956014 containerd[1586]: time="2026-04-28T01:24:56.955611266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:56.957784 containerd[1586]: time="2026-04-28T01:24:56.957730931Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 2.748841647s" Apr 28 01:24:56.957784 containerd[1586]: time="2026-04-28T01:24:56.957771084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 28 01:24:56.988750 containerd[1586]: time="2026-04-28T01:24:56.988305457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 28 01:24:58.034834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 28 01:24:58.054435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:24:58.263913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995519053.mount: Deactivated successfully. Apr 28 01:24:58.295938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 01:24:58.306695 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:24:58.389645 kubelet[2466]: E0428 01:24:58.389537 2466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:24:58.392486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:24:58.392704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:24:58.854466 containerd[1586]: time="2026-04-28T01:24:58.853113301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:58.873019 containerd[1586]: time="2026-04-28T01:24:58.872071504Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 28 01:24:58.903433 containerd[1586]: time="2026-04-28T01:24:58.903073924Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:58.907693 containerd[1586]: time="2026-04-28T01:24:58.907644910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:24:58.908323 containerd[1586]: time="2026-04-28T01:24:58.908281542Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.919856136s" Apr 28 01:24:58.908359 containerd[1586]: time="2026-04-28T01:24:58.908326394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 28 01:25:03.411227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:25:03.432013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:25:03.583590 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-7.scope)... Apr 28 01:25:03.583635 systemd[1]: Reloading... Apr 28 01:25:03.713655 zram_generator::config[2530]: No configuration found. Apr 28 01:25:04.725788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 01:25:04.843583 systemd[1]: Reloading finished in 1259 ms. Apr 28 01:25:04.944445 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 28 01:25:04.944729 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 28 01:25:04.947348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:25:04.969050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:25:05.696049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:25:05.700631 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:25:06.439599 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:25:06.439599 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 01:25:06.439599 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:25:06.450418 kubelet[2589]: I0428 01:25:06.439650 2589 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:25:08.252510 kubelet[2589]: I0428 01:25:08.252163 2589 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 01:25:08.252510 kubelet[2589]: I0428 01:25:08.252357 2589 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:25:08.254053 kubelet[2589]: I0428 01:25:08.252926 2589 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:25:08.498615 kubelet[2589]: E0428 01:25:08.496359 2589 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:25:08.703944 kubelet[2589]: I0428 01:25:08.703632 2589 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:25:08.761669 kubelet[2589]: E0428 01:25:08.761516 2589 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 
01:25:08.761669 kubelet[2589]: I0428 01:25:08.761558 2589 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 01:25:08.788618 kubelet[2589]: I0428 01:25:08.788418 2589 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 01:25:08.789225 kubelet[2589]: I0428 01:25:08.789154 2589 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:25:08.790652 kubelet[2589]: I0428 01:25:08.789245 2589 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcil
ePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 01:25:08.790790 kubelet[2589]: I0428 01:25:08.790709 2589 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:25:08.790790 kubelet[2589]: I0428 01:25:08.790719 2589 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 01:25:08.791869 kubelet[2589]: I0428 01:25:08.791833 2589 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:25:08.800511 kubelet[2589]: I0428 01:25:08.800224 2589 kubelet.go:480] "Attempting to sync node with API server" Apr 28 01:25:08.802392 kubelet[2589]: I0428 01:25:08.800950 2589 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:25:08.806358 kubelet[2589]: I0428 01:25:08.803860 2589 kubelet.go:386] "Adding apiserver pod source" Apr 28 01:25:08.806358 kubelet[2589]: I0428 01:25:08.804123 2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:25:08.814558 kubelet[2589]: E0428 01:25:08.814130 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:25:08.814558 kubelet[2589]: E0428 01:25:08.814150 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:25:08.817440 kubelet[2589]: I0428 01:25:08.817398 2589 
kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:25:08.818149 kubelet[2589]: I0428 01:25:08.818107 2589 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:25:08.825374 kubelet[2589]: W0428 01:25:08.825039 2589 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 01:25:08.913020 kubelet[2589]: I0428 01:25:08.912812 2589 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 01:25:08.914934 kubelet[2589]: I0428 01:25:08.913234 2589 server.go:1289] "Started kubelet" Apr 28 01:25:08.914934 kubelet[2589]: I0428 01:25:08.913413 2589 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:25:08.916805 kubelet[2589]: I0428 01:25:08.916240 2589 server.go:317] "Adding debug handlers to kubelet server" Apr 28 01:25:08.916805 kubelet[2589]: I0428 01:25:08.916464 2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 01:25:08.919799 kubelet[2589]: I0428 01:25:08.918961 2589 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 01:25:08.928304 kubelet[2589]: I0428 01:25:08.928154 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 01:25:08.944366 kubelet[2589]: I0428 01:25:08.942533 2589 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 01:25:08.948999 kubelet[2589]: I0428 01:25:08.948584 2589 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 01:25:08.952943 kubelet[2589]: E0428 01:25:08.951026 2589 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 28 01:25:08.953351 kubelet[2589]: I0428 01:25:08.953301 2589 factory.go:223] Registration of the systemd container factory successfully Apr 28 01:25:08.953598 kubelet[2589]: I0428 01:25:08.953557 2589 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 01:25:08.954270 kubelet[2589]: E0428 01:25:08.954153 2589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Apr 28 01:25:08.954647 kubelet[2589]: E0428 01:25:08.954578 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:25:08.954647 kubelet[2589]: I0428 01:25:08.946760 2589 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 01:25:08.954799 kubelet[2589]: I0428 01:25:08.954786 2589 reconciler.go:26] "Reconciler: start to sync state" Apr 28 01:25:08.969417 kubelet[2589]: E0428 01:25:08.920734 2589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa60ddda05f577 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:25:08.912944503 +0000 
UTC m=+3.166585569,LastTimestamp:2026-04-28 01:25:08.912944503 +0000 UTC m=+3.166585569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:25:08.971195 kubelet[2589]: I0428 01:25:08.970558 2589 factory.go:223] Registration of the containerd container factory successfully Apr 28 01:25:08.975779 kubelet[2589]: E0428 01:25:08.973708 2589 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 01:25:09.000460 kubelet[2589]: I0428 01:25:09.000381 2589 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 01:25:09.006665 kubelet[2589]: I0428 01:25:09.005395 2589 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 01:25:09.006665 kubelet[2589]: I0428 01:25:09.005630 2589 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 01:25:09.006665 kubelet[2589]: I0428 01:25:09.005730 2589 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 01:25:09.006665 kubelet[2589]: I0428 01:25:09.005831 2589 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 01:25:09.006665 kubelet[2589]: E0428 01:25:09.006125 2589 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:25:09.008582 kubelet[2589]: E0428 01:25:09.008541 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:25:09.009277 kubelet[2589]: I0428 01:25:09.008959 2589 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 01:25:09.009277 kubelet[2589]: I0428 01:25:09.008981 2589 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 01:25:09.009277 kubelet[2589]: I0428 01:25:09.009005 2589 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:25:09.011787 kubelet[2589]: I0428 01:25:09.011744 2589 policy_none.go:49] "None policy: Start" Apr 28 01:25:09.011787 kubelet[2589]: I0428 01:25:09.011792 2589 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 01:25:09.011886 kubelet[2589]: I0428 01:25:09.011810 2589 state_mem.go:35] "Initializing new in-memory state store" Apr 28 01:25:09.029994 kubelet[2589]: E0428 01:25:09.029812 2589 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 01:25:09.034976 kubelet[2589]: I0428 01:25:09.030287 2589 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 01:25:09.034976 kubelet[2589]: I0428 01:25:09.030345 2589 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 01:25:09.034976 kubelet[2589]: I0428 
01:25:09.032288 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 01:25:09.046182 kubelet[2589]: E0428 01:25:09.046108 2589 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 01:25:09.046779 kubelet[2589]: E0428 01:25:09.046429 2589 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:25:09.156617 kubelet[2589]: I0428 01:25:09.156389 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:25:09.156617 kubelet[2589]: I0428 01:25:09.156513 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:25:09.156617 kubelet[2589]: I0428 01:25:09.164406 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:25:09.156617 kubelet[2589]: I0428 01:25:09.166029 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:25:09.156617 kubelet[2589]: I0428 01:25:09.166127 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:25:09.167099 kubelet[2589]: I0428 01:25:09.166179 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:25:09.167099 kubelet[2589]: E0428 01:25:09.166626 2589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Apr 28 01:25:09.167099 kubelet[2589]: I0428 01:25:09.166993 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:25:09.173549 kubelet[2589]: I0428 01:25:09.167795 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:25:09.186568 kubelet[2589]: I0428 01:25:09.185897 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:25:09.189883 kubelet[2589]: E0428 01:25:09.189745 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:25:09.190845 kubelet[2589]: E0428 01:25:09.190101 2589 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Apr 28 01:25:09.195977 kubelet[2589]: E0428 01:25:09.195855 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:25:09.199401 kubelet[2589]: E0428 01:25:09.199376 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:25:09.315382 kubelet[2589]: I0428 01:25:09.315117 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 01:25:09.401930 kubelet[2589]: I0428 01:25:09.401078 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:25:09.406776 kubelet[2589]: E0428 01:25:09.406663 2589 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Apr 28 01:25:09.498148 kubelet[2589]: E0428 01:25:09.497725 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:09.498148 kubelet[2589]: E0428 01:25:09.497751 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:09.503425 kubelet[2589]: E0428 01:25:09.503358 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:09.506626 containerd[1586]: time="2026-04-28T01:25:09.506565958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:09.506626 containerd[1586]: time="2026-04-28T01:25:09.506608360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd07e1cda6dcdff238db2463d4d3deb1,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:09.507110 containerd[1586]: time="2026-04-28T01:25:09.506582287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:09.578446 kubelet[2589]: E0428 01:25:09.578054 2589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Apr 28 01:25:09.638768 kubelet[2589]: E0428 01:25:09.637646 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 28 01:25:09.850706 kubelet[2589]: I0428 01:25:09.833417 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:25:09.928047 kubelet[2589]: E0428 01:25:09.926152 2589 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Apr 28 01:25:10.219587 kubelet[2589]: E0428 01:25:10.218753 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:25:10.219587 kubelet[2589]: E0428 01:25:10.218774 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:25:10.416227 kubelet[2589]: E0428 01:25:10.412828 2589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Apr 28 01:25:10.466736 kubelet[2589]: E0428 01:25:10.466459 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:25:10.787690 kubelet[2589]: E0428 01:25:10.787504 2589 certificate_manager.go:596] 
"Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:25:10.804117 kubelet[2589]: I0428 01:25:10.803549 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:25:10.810274 kubelet[2589]: E0428 01:25:10.810142 2589 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Apr 28 01:25:11.125883 containerd[1586]: time="2026-04-28T01:25:11.125378784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:25:11.133066 containerd[1586]: time="2026-04-28T01:25:11.130814695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 01:25:11.138703 containerd[1586]: time="2026-04-28T01:25:11.137557535Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:25:11.140254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288719948.mount: Deactivated successfully. 
Apr 28 01:25:11.141074 containerd[1586]: time="2026-04-28T01:25:11.141026717Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:25:11.141585 containerd[1586]: time="2026-04-28T01:25:11.141523922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:25:11.143324 containerd[1586]: time="2026-04-28T01:25:11.143151365Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:25:11.144195 containerd[1586]: time="2026-04-28T01:25:11.144130454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:25:11.146533 containerd[1586]: time="2026-04-28T01:25:11.146485081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:25:11.149239 containerd[1586]: time="2026-04-28T01:25:11.148186560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.640370399s" Apr 28 01:25:11.155538 containerd[1586]: time="2026-04-28T01:25:11.155368247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.6483711s" Apr 28 01:25:11.169576 containerd[1586]: time="2026-04-28T01:25:11.167788877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.660919203s" Apr 28 01:25:11.555235 kubelet[2589]: E0428 01:25:11.554954 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632662419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632748616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632762192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632598999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632697629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632713235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.634857 containerd[1586]: time="2026-04-28T01:25:11.632857720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.636584 containerd[1586]: time="2026-04-28T01:25:11.632955357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.640553 containerd[1586]: time="2026-04-28T01:25:11.631747519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:11.641416 containerd[1586]: time="2026-04-28T01:25:11.640805389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:11.641416 containerd[1586]: time="2026-04-28T01:25:11.640931884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.641416 containerd[1586]: time="2026-04-28T01:25:11.641169623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:11.860432 containerd[1586]: time="2026-04-28T01:25:11.859348791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"158deffec2b9a78027c6b1dd694048db795565547cf57592276e97955bc2d8a7\"" Apr 28 01:25:11.862084 containerd[1586]: time="2026-04-28T01:25:11.862003214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6266d7ba8c24c0e2cc849dc0e30a0f3e9d83bec15a504b5197217a01c44716\"" Apr 28 01:25:11.862302 containerd[1586]: time="2026-04-28T01:25:11.862257166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd07e1cda6dcdff238db2463d4d3deb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cd8f5f03da38a59b305afeac60be14f0dc694da7e56f065ba2b3c89ca56a7fb\"" Apr 28 01:25:11.889090 kubelet[2589]: E0428 01:25:11.886363 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:11.889090 kubelet[2589]: E0428 01:25:11.886384 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:11.889090 kubelet[2589]: E0428 01:25:11.886465 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:11.956421 containerd[1586]: time="2026-04-28T01:25:11.955989694Z" level=info msg="CreateContainer within sandbox \"8b6266d7ba8c24c0e2cc849dc0e30a0f3e9d83bec15a504b5197217a01c44716\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 01:25:11.995385 containerd[1586]: time="2026-04-28T01:25:11.995259231Z" level=info msg="CreateContainer within sandbox \"6cd8f5f03da38a59b305afeac60be14f0dc694da7e56f065ba2b3c89ca56a7fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 01:25:11.998399 containerd[1586]: time="2026-04-28T01:25:11.998249370Z" level=info msg="CreateContainer within sandbox \"158deffec2b9a78027c6b1dd694048db795565547cf57592276e97955bc2d8a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 01:25:12.030052 kubelet[2589]: E0428 01:25:12.028961 2589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" Apr 28 01:25:12.134267 containerd[1586]: time="2026-04-28T01:25:12.133312942Z" level=info msg="CreateContainer within sandbox \"8b6266d7ba8c24c0e2cc849dc0e30a0f3e9d83bec15a504b5197217a01c44716\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca\"" Apr 28 01:25:12.135488 containerd[1586]: time="2026-04-28T01:25:12.133683176Z" level=info msg="CreateContainer within sandbox \"6cd8f5f03da38a59b305afeac60be14f0dc694da7e56f065ba2b3c89ca56a7fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3975ff11e1122d42387e2130d1f4e6904161596d09d99b42223543463248c282\"" Apr 28 01:25:12.137491 containerd[1586]: time="2026-04-28T01:25:12.137463032Z" level=info msg="StartContainer for \"057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca\"" Apr 28 01:25:12.138240 containerd[1586]: time="2026-04-28T01:25:12.137466014Z" level=info msg="StartContainer for \"3975ff11e1122d42387e2130d1f4e6904161596d09d99b42223543463248c282\"" Apr 28 01:25:12.147549 
containerd[1586]: time="2026-04-28T01:25:12.147431178Z" level=info msg="CreateContainer within sandbox \"158deffec2b9a78027c6b1dd694048db795565547cf57592276e97955bc2d8a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd\""
Apr 28 01:25:12.149105 containerd[1586]: time="2026-04-28T01:25:12.149057903Z" level=info msg="StartContainer for \"df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd\""
Apr 28 01:25:12.205907 kubelet[2589]: E0428 01:25:12.205772 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 01:25:12.241404 kubelet[2589]: E0428 01:25:12.240798 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:25:12.448910 kubelet[2589]: I0428 01:25:12.448591 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:25:12.450064 kubelet[2589]: E0428 01:25:12.450007 2589 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Apr 28 01:25:12.500986 containerd[1586]: time="2026-04-28T01:25:12.500804500Z" level=info msg="StartContainer for \"057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca\" returns successfully"
Apr 28 01:25:12.684371 containerd[1586]: time="2026-04-28T01:25:12.680027900Z" level=info msg="StartContainer for \"3975ff11e1122d42387e2130d1f4e6904161596d09d99b42223543463248c282\" returns successfully"
Apr 28 01:25:12.888312 kubelet[2589]: E0428 01:25:12.886370 2589 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:25:12.895054 containerd[1586]: time="2026-04-28T01:25:12.894832032Z" level=info msg="StartContainer for \"df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd\" returns successfully"
Apr 28 01:25:13.513290 kubelet[2589]: E0428 01:25:13.509635 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:13.513290 kubelet[2589]: E0428 01:25:13.509991 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:13.552761 kubelet[2589]: E0428 01:25:13.552138 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:13.568339 kubelet[2589]: E0428 01:25:13.562786 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:13.714543 kubelet[2589]: E0428 01:25:13.714134 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:13.716118 kubelet[2589]: E0428 01:25:13.716053 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:15.056866 kubelet[2589]: E0428 01:25:15.056675 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:15.065908 kubelet[2589]: E0428 01:25:15.059720 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:15.088805 kubelet[2589]: E0428 01:25:15.063169 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:15.101025 kubelet[2589]: E0428 01:25:15.100068 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:15.167913 kubelet[2589]: E0428 01:25:15.120655 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:15.183059 kubelet[2589]: E0428 01:25:15.182170 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:16.021362 kubelet[2589]: I0428 01:25:16.021122 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:25:16.699950 kubelet[2589]: E0428 01:25:16.688972 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:16.699950 kubelet[2589]: E0428 01:25:16.696509 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:16.699950 kubelet[2589]: E0428 01:25:16.698053 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:16.720303 kubelet[2589]: E0428 01:25:16.708150 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:17.470149 kubelet[2589]: E0428 01:25:17.469897 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:17.471248 kubelet[2589]: E0428 01:25:17.470991 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:17.953507 kubelet[2589]: E0428 01:25:17.953440 2589 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:25:17.954396 kubelet[2589]: E0428 01:25:17.953913 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:19.051816 kubelet[2589]: E0428 01:25:19.047827 2589 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:25:23.627995 kubelet[2589]: I0428 01:25:23.627779 2589 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 01:25:23.656319 kubelet[2589]: I0428 01:25:23.655845 2589 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:24.203040 kubelet[2589]: I0428 01:25:24.199856 2589 apiserver.go:52] "Watching apiserver"
Apr 28 01:25:24.249273 kubelet[2589]: I0428 01:25:24.244927 2589 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:24.421480 kubelet[2589]: I0428 01:25:24.418912 2589 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 01:25:24.444036 kubelet[2589]: I0428 01:25:24.443138 2589 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:24.444036 kubelet[2589]: I0428 01:25:24.443798 2589 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:24.447065 kubelet[2589]: E0428 01:25:24.445671 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:24.477613 kubelet[2589]: E0428 01:25:24.457186 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:24.545804 kubelet[2589]: E0428 01:25:24.544828 2589 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:24.545804 kubelet[2589]: E0428 01:25:24.545109 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:24.992676 kubelet[2589]: I0428 01:25:24.988467 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9884270019999999 podStartE2EDuration="1.988427002s" podCreationTimestamp="2026-04-28 01:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:25:24.988251619 +0000 UTC m=+19.241892707" watchObservedRunningTime="2026-04-28 01:25:24.988427002 +0000 UTC m=+19.242068073"
Apr 28 01:25:26.312371 kubelet[2589]: E0428 01:25:26.305083 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:28.450557 kubelet[2589]: E0428 01:25:28.450167 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:28.857365 kubelet[2589]: I0428 01:25:28.854859 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.854725619 podStartE2EDuration="4.854725619s" podCreationTimestamp="2026-04-28 01:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:25:28.725661303 +0000 UTC m=+22.979302374" watchObservedRunningTime="2026-04-28 01:25:28.854725619 +0000 UTC m=+23.108366685"
Apr 28 01:25:28.857365 kubelet[2589]: I0428 01:25:28.857446 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.857430785 podStartE2EDuration="4.857430785s" podCreationTimestamp="2026-04-28 01:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:25:28.854681227 +0000 UTC m=+23.108322311" watchObservedRunningTime="2026-04-28 01:25:28.857430785 +0000 UTC m=+23.111071862"
Apr 28 01:25:42.394819 systemd[1]: Reloading requested from client PID 2881 ('systemctl') (unit session-7.scope)...
Apr 28 01:25:42.394896 systemd[1]: Reloading...
Apr 28 01:25:42.704775 zram_generator::config[2919]: No configuration found.
Apr 28 01:25:43.439170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:25:43.590535 systemd[1]: Reloading finished in 1184 ms.
Apr 28 01:25:43.644521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:25:43.732916 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 01:25:43.735515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:25:43.777518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:25:44.192396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:25:44.203544 (kubelet)[2975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 01:25:44.370475 kubelet[2975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 01:25:44.370475 kubelet[2975]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 01:25:44.370475 kubelet[2975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 01:25:44.373744 kubelet[2975]: I0428 01:25:44.370644 2975 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 01:25:44.405727 kubelet[2975]: I0428 01:25:44.403174 2975 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 28 01:25:44.405727 kubelet[2975]: I0428 01:25:44.403413 2975 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 01:25:44.410357 kubelet[2975]: I0428 01:25:44.410068 2975 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 01:25:44.412455 kubelet[2975]: I0428 01:25:44.412349 2975 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 28 01:25:44.435314 kubelet[2975]: I0428 01:25:44.435072 2975 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 01:25:44.497157 kubelet[2975]: E0428 01:25:44.495003 2975 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 01:25:44.497157 kubelet[2975]: I0428 01:25:44.496763 2975 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 28 01:25:44.508776 kubelet[2975]: I0428 01:25:44.508727 2975 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 28 01:25:44.510585 kubelet[2975]: I0428 01:25:44.510064 2975 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 01:25:44.510585 kubelet[2975]: I0428 01:25:44.510246 2975 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 28 01:25:44.510585 kubelet[2975]: I0428 01:25:44.510478 2975 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 01:25:44.510585 kubelet[2975]: I0428 01:25:44.510490 2975 container_manager_linux.go:303] "Creating device plugin manager"
Apr 28 01:25:44.510585 kubelet[2975]: I0428 01:25:44.510545 2975 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 01:25:44.524917 kubelet[2975]: I0428 01:25:44.523950 2975 kubelet.go:480] "Attempting to sync node with API server"
Apr 28 01:25:44.524917 kubelet[2975]: I0428 01:25:44.524021 2975 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 01:25:44.524917 kubelet[2975]: I0428 01:25:44.524132 2975 kubelet.go:386] "Adding apiserver pod source"
Apr 28 01:25:44.524917 kubelet[2975]: I0428 01:25:44.524178 2975 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 01:25:44.540472 kubelet[2975]: I0428 01:25:44.538131 2975 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 01:25:44.545268 kubelet[2975]: I0428 01:25:44.543349 2975 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 01:25:44.558772 kubelet[2975]: I0428 01:25:44.558658 2975 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 28 01:25:44.558772 kubelet[2975]: I0428 01:25:44.558758 2975 server.go:1289] "Started kubelet"
Apr 28 01:25:44.595328 kubelet[2975]: I0428 01:25:44.587401 2975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 01:25:44.598645 kubelet[2975]: I0428 01:25:44.560398 2975 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 01:25:44.596744 sudo[2992]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 28 01:25:44.597033 sudo[2992]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 28 01:25:44.602602 kubelet[2975]: I0428 01:25:44.601626 2975 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 01:25:44.603825 kubelet[2975]: I0428 01:25:44.603276 2975 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 01:25:44.604662 kubelet[2975]: I0428 01:25:44.604623 2975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 01:25:44.605563 kubelet[2975]: I0428 01:25:44.605064 2975 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 01:25:44.607136 kubelet[2975]: I0428 01:25:44.607087 2975 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 28 01:25:44.607277 kubelet[2975]: I0428 01:25:44.607243 2975 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 28 01:25:44.609485 kubelet[2975]: I0428 01:25:44.607403 2975 reconciler.go:26] "Reconciler: start to sync state"
Apr 28 01:25:44.615467 kubelet[2975]: I0428 01:25:44.611718 2975 factory.go:223] Registration of the systemd container factory successfully
Apr 28 01:25:44.615467 kubelet[2975]: I0428 01:25:44.612298 2975 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 01:25:44.628846 kubelet[2975]: E0428 01:25:44.628718 2975 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 01:25:44.695546 kubelet[2975]: I0428 01:25:44.694906 2975 factory.go:223] Registration of the containerd container factory successfully
Apr 28 01:25:44.705024 kubelet[2975]: I0428 01:25:44.704945 2975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 28 01:25:44.706315 kubelet[2975]: I0428 01:25:44.706135 2975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 28 01:25:44.706409 kubelet[2975]: I0428 01:25:44.706402 2975 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 28 01:25:44.706530 kubelet[2975]: I0428 01:25:44.706521 2975 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 01:25:44.706583 kubelet[2975]: I0428 01:25:44.706578 2975 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 28 01:25:44.706653 kubelet[2975]: E0428 01:25:44.706641 2975 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 01:25:44.810123 kubelet[2975]: E0428 01:25:44.809882 2975 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:25:44.928364 kubelet[2975]: I0428 01:25:44.928323 2975 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 01:25:44.928663 kubelet[2975]: I0428 01:25:44.928351 2975 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 01:25:44.928663 kubelet[2975]: I0428 01:25:44.928505 2975 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 01:25:44.928771 kubelet[2975]: I0428 01:25:44.928741 2975 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 28 01:25:44.929129 kubelet[2975]: I0428 01:25:44.928761 2975 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 28 01:25:44.929129 kubelet[2975]: I0428 01:25:44.928878 2975 policy_none.go:49] "None policy: Start"
Apr 28 01:25:44.929129 kubelet[2975]: I0428 01:25:44.928888 2975 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 28 01:25:44.929129 kubelet[2975]: I0428 01:25:44.928896 2975 state_mem.go:35] "Initializing new in-memory state store"
Apr 28 01:25:44.929129 kubelet[2975]: I0428 01:25:44.929005 2975 state_mem.go:75] "Updated machine memory state"
Apr 28 01:25:45.033113 kubelet[2975]: E0428 01:25:45.018424 2975 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:25:45.033113 kubelet[2975]: E0428 01:25:45.026004 2975 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 28 01:25:45.033113 kubelet[2975]: I0428 01:25:45.028069 2975 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 28 01:25:45.033113 kubelet[2975]: I0428 01:25:45.028135 2975 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 28 01:25:45.033113 kubelet[2975]: I0428 01:25:45.028919 2975 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 28 01:25:45.045503 kubelet[2975]: E0428 01:25:45.042674 2975 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 28 01:25:45.285518 kubelet[2975]: I0428 01:25:45.285458 2975 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:25:45.325519 kubelet[2975]: I0428 01:25:45.325326 2975 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 28 01:25:45.326106 kubelet[2975]: I0428 01:25:45.325745 2975 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 01:25:45.431983 kubelet[2975]: I0428 01:25:45.431793 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:45.431983 kubelet[2975]: I0428 01:25:45.431850 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.433262 kubelet[2975]: I0428 01:25:45.432361 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:45.475223 kubelet[2975]: E0428 01:25:45.474994 2975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.475223 kubelet[2975]: E0428 01:25:45.475003 2975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:45.475662 kubelet[2975]: E0428 01:25:45.475404 2975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:45.528373 kubelet[2975]: I0428 01:25:45.528102 2975 apiserver.go:52] "Watching apiserver"
Apr 28 01:25:45.549031 kubelet[2975]: I0428 01:25:45.536868 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:45.549031 kubelet[2975]: I0428 01:25:45.540308 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.549031 kubelet[2975]: I0428 01:25:45.547374 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.549031 kubelet[2975]: I0428 01:25:45.547602 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.549031 kubelet[2975]: I0428 01:25:45.547626 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 28 01:25:45.549654 kubelet[2975]: I0428 01:25:45.547774 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:45.549654 kubelet[2975]: I0428 01:25:45.547791 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd07e1cda6dcdff238db2463d4d3deb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd07e1cda6dcdff238db2463d4d3deb1\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:25:45.549654 kubelet[2975]: I0428 01:25:45.547804 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.549654 kubelet[2975]: I0428 01:25:45.547887 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:25:45.609849 kubelet[2975]: I0428 01:25:45.607857 2975 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 01:25:45.783361 kubelet[2975]: E0428 01:25:45.783169 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:45.783885 kubelet[2975]: E0428 01:25:45.783732 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:45.791544 kubelet[2975]: E0428 01:25:45.782114 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:45.975387 kubelet[2975]: E0428 01:25:45.974827 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:45.976372 kubelet[2975]: E0428 01:25:45.974930 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:46.014889 sudo[2992]: pam_unix(sudo:session): session closed for user root
Apr 28 01:25:46.958251 kubelet[2975]: I0428 01:25:46.958155 2975 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 28 01:25:46.963482 kubelet[2975]: I0428 01:25:46.958856 2975 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 28 01:25:46.963613 containerd[1586]: time="2026-04-28T01:25:46.958636307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 28 01:25:46.972941 kubelet[2975]: E0428 01:25:46.967170 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:46.972941 kubelet[2975]: E0428 01:25:46.968633 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:47.977292 kubelet[2975]: E0428 01:25:47.976823 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:25:48.055056 kubelet[2975]: I0428 01:25:48.054841 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6727827-da9b-437b-8ad1-9883d3976194-lib-modules\") pod \"kube-proxy-pl97j\" (UID: \"f6727827-da9b-437b-8ad1-9883d3976194\") " pod="kube-system/kube-proxy-pl97j"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055119 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-cgroup\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055396 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-etc-cni-netd\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055457 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-net\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055505 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7q9\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-kube-api-access-5s7q9\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055531 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-run\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055783 kubelet[2975]: I0428 01:25:48.055549 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-lib-modules\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055936 kubelet[2975]: I0428 01:25:48.055572 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-xtables-lock\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.055988 kubelet[2975]: I0428 01:25:48.055950 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-kernel\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056162 kubelet[2975]: I0428 01:25:48.056127 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6727827-da9b-437b-8ad1-9883d3976194-kube-proxy\") pod \"kube-proxy-pl97j\" (UID: \"f6727827-da9b-437b-8ad1-9883d3976194\") " pod="kube-system/kube-proxy-pl97j"
Apr 28 01:25:48.056257 kubelet[2975]: I0428 01:25:48.056236 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6727827-da9b-437b-8ad1-9883d3976194-xtables-lock\") pod \"kube-proxy-pl97j\" (UID: \"f6727827-da9b-437b-8ad1-9883d3976194\") " pod="kube-system/kube-proxy-pl97j"
Apr 28 01:25:48.056328 kubelet[2975]: I0428 01:25:48.056303 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-bpf-maps\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056352 kubelet[2975]: I0428 01:25:48.056336 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cni-path\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056621 kubelet[2975]: I0428 01:25:48.056589 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvxm\" (UniqueName: \"kubernetes.io/projected/f6727827-da9b-437b-8ad1-9883d3976194-kube-api-access-nsvxm\") pod \"kube-proxy-pl97j\" (UID: \"f6727827-da9b-437b-8ad1-9883d3976194\") " pod="kube-system/kube-proxy-pl97j"
Apr 28 01:25:48.056723 kubelet[2975]: I0428 01:25:48.056696 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-hostproc\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056824 kubelet[2975]: I0428 01:25:48.056796 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/147c3752-e4b1-4bee-bb21-d219f93b4aba-clustermesh-secrets\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056917 kubelet[2975]: I0428 01:25:48.056894 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-config-path\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.056999 kubelet[2975]: I0428 01:25:48.056964 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-hubble-tls\") pod \"cilium-g2fwx\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") " pod="kube-system/cilium-g2fwx"
Apr 28 01:25:48.414609 kubelet[2975]: I0428 01:25:48.414493 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k9rx\" (UniqueName: \"kubernetes.io/projected/bde040ca-45b5-4f6f-8fdf-ed9859696254-kube-api-access-8k9rx\") pod
\"cilium-operator-6c4d7847fc-rxfcj\" (UID: \"bde040ca-45b5-4f6f-8fdf-ed9859696254\") " pod="kube-system/cilium-operator-6c4d7847fc-rxfcj" Apr 28 01:25:48.414943 kubelet[2975]: I0428 01:25:48.414632 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde040ca-45b5-4f6f-8fdf-ed9859696254-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rxfcj\" (UID: \"bde040ca-45b5-4f6f-8fdf-ed9859696254\") " pod="kube-system/cilium-operator-6c4d7847fc-rxfcj" Apr 28 01:25:48.448893 kubelet[2975]: E0428 01:25:48.448691 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.452294 containerd[1586]: time="2026-04-28T01:25:48.452239939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl97j,Uid:f6727827-da9b-437b-8ad1-9883d3976194,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:48.462730 kubelet[2975]: E0428 01:25:48.462403 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.466114 containerd[1586]: time="2026-04-28T01:25:48.466039446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2fwx,Uid:147c3752-e4b1-4bee-bb21-d219f93b4aba,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:48.657185 containerd[1586]: time="2026-04-28T01:25:48.657015130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:48.658342 containerd[1586]: time="2026-04-28T01:25:48.658303502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:48.658544 containerd[1586]: time="2026-04-28T01:25:48.658415865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.658613 containerd[1586]: time="2026-04-28T01:25:48.658533608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.665846 containerd[1586]: time="2026-04-28T01:25:48.665410080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:48.665846 containerd[1586]: time="2026-04-28T01:25:48.665452535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:48.665846 containerd[1586]: time="2026-04-28T01:25:48.665464309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.665846 containerd[1586]: time="2026-04-28T01:25:48.665549055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.762105 containerd[1586]: time="2026-04-28T01:25:48.761949289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2fwx,Uid:147c3752-e4b1-4bee-bb21-d219f93b4aba,Namespace:kube-system,Attempt:0,} returns sandbox id \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\"" Apr 28 01:25:48.764682 kubelet[2975]: E0428 01:25:48.764616 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.786651 containerd[1586]: time="2026-04-28T01:25:48.786484215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 28 01:25:48.790578 containerd[1586]: time="2026-04-28T01:25:48.789988859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl97j,Uid:f6727827-da9b-437b-8ad1-9883d3976194,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab20c9554f481aa920d1b3190a66e2825d2c0172ddddfaa2737f3d726145f45\"" Apr 28 01:25:48.794378 kubelet[2975]: E0428 01:25:48.793807 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.794837 containerd[1586]: time="2026-04-28T01:25:48.794814876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rxfcj,Uid:bde040ca-45b5-4f6f-8fdf-ed9859696254,Namespace:kube-system,Attempt:0,}" Apr 28 01:25:48.795668 kubelet[2975]: E0428 01:25:48.795624 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.816142 containerd[1586]: time="2026-04-28T01:25:48.816098012Z" level=info msg="CreateContainer within sandbox 
\"bab20c9554f481aa920d1b3190a66e2825d2c0172ddddfaa2737f3d726145f45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 01:25:48.843329 containerd[1586]: time="2026-04-28T01:25:48.843270951Z" level=info msg="CreateContainer within sandbox \"bab20c9554f481aa920d1b3190a66e2825d2c0172ddddfaa2737f3d726145f45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec32fbd51c4358bf41eb63af489d0e76c2c9056092b6bacf9c0bc4733067db78\"" Apr 28 01:25:48.844180 containerd[1586]: time="2026-04-28T01:25:48.844055113Z" level=info msg="StartContainer for \"ec32fbd51c4358bf41eb63af489d0e76c2c9056092b6bacf9c0bc4733067db78\"" Apr 28 01:25:48.849859 containerd[1586]: time="2026-04-28T01:25:48.849725368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:25:48.849859 containerd[1586]: time="2026-04-28T01:25:48.849820673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:25:48.849859 containerd[1586]: time="2026-04-28T01:25:48.849831979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.850151 containerd[1586]: time="2026-04-28T01:25:48.850056875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:25:48.896153 containerd[1586]: time="2026-04-28T01:25:48.894781746Z" level=info msg="StartContainer for \"ec32fbd51c4358bf41eb63af489d0e76c2c9056092b6bacf9c0bc4733067db78\" returns successfully" Apr 28 01:25:48.924872 containerd[1586]: time="2026-04-28T01:25:48.924587446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rxfcj,Uid:bde040ca-45b5-4f6f-8fdf-ed9859696254,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\"" Apr 28 01:25:48.927593 kubelet[2975]: E0428 01:25:48.927188 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.988489 kubelet[2975]: E0428 01:25:48.988065 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:49.222055 kubelet[2975]: E0428 01:25:49.220723 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:49.249172 kubelet[2975]: I0428 01:25:49.249098 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pl97j" podStartSLOduration=2.249079932 podStartE2EDuration="2.249079932s" podCreationTimestamp="2026-04-28 01:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:25:49.043813791 +0000 UTC m=+4.806392689" watchObservedRunningTime="2026-04-28 01:25:49.249079932 +0000 UTC m=+5.011658842" Apr 28 01:25:49.560995 kubelet[2975]: E0428 01:25:49.560726 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:50.032354 kubelet[2975]: E0428 01:25:50.031193 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:50.054156 kubelet[2975]: E0428 01:25:50.046954 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:52.122420 kubelet[2975]: E0428 01:25:52.122104 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:53.170464 kubelet[2975]: E0428 01:25:53.164455 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:55.814895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314113583.mount: Deactivated successfully. 
Apr 28 01:25:58.360108 containerd[1586]: time="2026-04-28T01:25:58.359542321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:25:58.360108 containerd[1586]: time="2026-04-28T01:25:58.360083827Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 28 01:25:58.368907 containerd[1586]: time="2026-04-28T01:25:58.368117157Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:25:58.382282 containerd[1586]: time="2026-04-28T01:25:58.381932712Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.59525674s" Apr 28 01:25:58.382282 containerd[1586]: time="2026-04-28T01:25:58.382042156Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 28 01:25:58.408775 containerd[1586]: time="2026-04-28T01:25:58.408553868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 28 01:25:58.448599 containerd[1586]: time="2026-04-28T01:25:58.448081295Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 01:25:58.553307 containerd[1586]: time="2026-04-28T01:25:58.553076254Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\"" Apr 28 01:25:58.556950 containerd[1586]: time="2026-04-28T01:25:58.556894767Z" level=info msg="StartContainer for \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\"" Apr 28 01:25:58.738699 containerd[1586]: time="2026-04-28T01:25:58.738334199Z" level=info msg="StartContainer for \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\" returns successfully" Apr 28 01:25:59.066064 containerd[1586]: time="2026-04-28T01:25:59.065649487Z" level=info msg="shim disconnected" id=ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e namespace=k8s.io Apr 28 01:25:59.066064 containerd[1586]: time="2026-04-28T01:25:59.065986956Z" level=warning msg="cleaning up after shim disconnected" id=ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e namespace=k8s.io Apr 28 01:25:59.066064 containerd[1586]: time="2026-04-28T01:25:59.066008178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:25:59.290427 kubelet[2975]: E0428 01:25:59.290305 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:59.314680 containerd[1586]: time="2026-04-28T01:25:59.313952398Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 01:25:59.404717 containerd[1586]: time="2026-04-28T01:25:59.398874223Z" level=info msg="CreateContainer within sandbox 
\"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\"" Apr 28 01:25:59.420630 containerd[1586]: time="2026-04-28T01:25:59.420558138Z" level=info msg="StartContainer for \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\"" Apr 28 01:25:59.547343 containerd[1586]: time="2026-04-28T01:25:59.546467249Z" level=info msg="StartContainer for \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\" returns successfully" Apr 28 01:25:59.550409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e-rootfs.mount: Deactivated successfully. Apr 28 01:25:59.574351 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 01:25:59.575395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 01:25:59.575454 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 28 01:25:59.593335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 01:25:59.754676 containerd[1586]: time="2026-04-28T01:25:59.749742201Z" level=info msg="shim disconnected" id=54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c namespace=k8s.io Apr 28 01:25:59.754676 containerd[1586]: time="2026-04-28T01:25:59.749954039Z" level=warning msg="cleaning up after shim disconnected" id=54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c namespace=k8s.io Apr 28 01:25:59.754676 containerd[1586]: time="2026-04-28T01:25:59.749962307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:25:59.754714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c-rootfs.mount: Deactivated successfully. 
Apr 28 01:25:59.763826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 01:25:59.782651 containerd[1586]: time="2026-04-28T01:25:59.782594235Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:25:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 01:26:00.137913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053388110.mount: Deactivated successfully. Apr 28 01:26:00.311811 kubelet[2975]: E0428 01:26:00.311598 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:00.379111 containerd[1586]: time="2026-04-28T01:26:00.378978438Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 01:26:00.454844 containerd[1586]: time="2026-04-28T01:26:00.453272385Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\"" Apr 28 01:26:00.476756 containerd[1586]: time="2026-04-28T01:26:00.476349734Z" level=info msg="StartContainer for \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\"" Apr 28 01:26:00.929477 containerd[1586]: time="2026-04-28T01:26:00.928648892Z" level=info msg="StartContainer for \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\" returns successfully" Apr 28 01:26:01.006710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523-rootfs.mount: Deactivated successfully. 
Apr 28 01:26:01.008105 containerd[1586]: time="2026-04-28T01:26:01.008043882Z" level=info msg="shim disconnected" id=b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523 namespace=k8s.io Apr 28 01:26:01.008288 containerd[1586]: time="2026-04-28T01:26:01.008185634Z" level=warning msg="cleaning up after shim disconnected" id=b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523 namespace=k8s.io Apr 28 01:26:01.008288 containerd[1586]: time="2026-04-28T01:26:01.008224842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:26:01.491100 kubelet[2975]: E0428 01:26:01.490982 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:01.821084 containerd[1586]: time="2026-04-28T01:26:01.817506455Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 28 01:26:01.905492 containerd[1586]: time="2026-04-28T01:26:01.904483819Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\"" Apr 28 01:26:01.911893 containerd[1586]: time="2026-04-28T01:26:01.911698573Z" level=info msg="StartContainer for \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\"" Apr 28 01:26:02.194373 containerd[1586]: time="2026-04-28T01:26:02.193746660Z" level=info msg="StartContainer for \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\" returns successfully" Apr 28 01:26:02.197469 containerd[1586]: time="2026-04-28T01:26:02.196769595Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:26:02.204854 containerd[1586]: time="2026-04-28T01:26:02.204487880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 28 01:26:02.209287 containerd[1586]: time="2026-04-28T01:26:02.209067405Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:26:02.239052 containerd[1586]: time="2026-04-28T01:26:02.238432635Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.829652637s" Apr 28 01:26:02.239052 containerd[1586]: time="2026-04-28T01:26:02.238590493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 28 01:26:02.265145 containerd[1586]: time="2026-04-28T01:26:02.264503758Z" level=info msg="CreateContainer within sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 28 01:26:02.297873 containerd[1586]: time="2026-04-28T01:26:02.296443353Z" level=info msg="shim disconnected" id=6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04 namespace=k8s.io Apr 28 01:26:02.309332 containerd[1586]: 
time="2026-04-28T01:26:02.298877727Z" level=warning msg="cleaning up after shim disconnected" id=6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04 namespace=k8s.io Apr 28 01:26:02.309332 containerd[1586]: time="2026-04-28T01:26:02.298981747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:26:02.322465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04-rootfs.mount: Deactivated successfully. Apr 28 01:26:02.432707 containerd[1586]: time="2026-04-28T01:26:02.432519409Z" level=info msg="CreateContainer within sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\"" Apr 28 01:26:02.438034 containerd[1586]: time="2026-04-28T01:26:02.434059262Z" level=info msg="StartContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\"" Apr 28 01:26:02.470707 kubelet[2975]: E0428 01:26:02.470060 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:02.526859 containerd[1586]: time="2026-04-28T01:26:02.525567296Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 28 01:26:02.606527 containerd[1586]: time="2026-04-28T01:26:02.603387593Z" level=info msg="CreateContainer within sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\"" Apr 28 01:26:02.616175 containerd[1586]: time="2026-04-28T01:26:02.616060742Z" level=info msg="StartContainer for 
\"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\"" Apr 28 01:26:02.654348 containerd[1586]: time="2026-04-28T01:26:02.653946887Z" level=info msg="StartContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" returns successfully" Apr 28 01:26:02.886482 containerd[1586]: time="2026-04-28T01:26:02.886336836Z" level=info msg="StartContainer for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" returns successfully" Apr 28 01:26:03.290584 kubelet[2975]: I0428 01:26:03.289565 2975 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 28 01:26:03.822322 kubelet[2975]: I0428 01:26:03.808323 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qks9l\" (UniqueName: \"kubernetes.io/projected/0053c634-d492-45cd-94d8-0d81ec3088f9-kube-api-access-qks9l\") pod \"coredns-674b8bbfcf-9k4ss\" (UID: \"0053c634-d492-45cd-94d8-0d81ec3088f9\") " pod="kube-system/coredns-674b8bbfcf-9k4ss" Apr 28 01:26:03.822322 kubelet[2975]: I0428 01:26:03.809263 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0053c634-d492-45cd-94d8-0d81ec3088f9-config-volume\") pod \"coredns-674b8bbfcf-9k4ss\" (UID: \"0053c634-d492-45cd-94d8-0d81ec3088f9\") " pod="kube-system/coredns-674b8bbfcf-9k4ss" Apr 28 01:26:03.918059 kubelet[2975]: I0428 01:26:03.914914 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4b9\" (UniqueName: \"kubernetes.io/projected/b3bab088-fa78-428a-bf00-fef2da577218-kube-api-access-vc4b9\") pod \"coredns-674b8bbfcf-56ckt\" (UID: \"b3bab088-fa78-428a-bf00-fef2da577218\") " pod="kube-system/coredns-674b8bbfcf-56ckt" Apr 28 01:26:03.918059 kubelet[2975]: I0428 01:26:03.915071 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3bab088-fa78-428a-bf00-fef2da577218-config-volume\") pod \"coredns-674b8bbfcf-56ckt\" (UID: \"b3bab088-fa78-428a-bf00-fef2da577218\") " pod="kube-system/coredns-674b8bbfcf-56ckt" Apr 28 01:26:04.009613 kubelet[2975]: E0428 01:26:04.009548 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:04.028398 kubelet[2975]: E0428 01:26:04.028273 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:04.211005 kubelet[2975]: E0428 01:26:04.210342 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:04.228022 containerd[1586]: time="2026-04-28T01:26:04.224803058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9k4ss,Uid:0053c634-d492-45cd-94d8-0d81ec3088f9,Namespace:kube-system,Attempt:0,}" Apr 28 01:26:04.301018 kubelet[2975]: I0428 01:26:04.300973 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g2fwx" podStartSLOduration=7.686102518 podStartE2EDuration="17.300957726s" podCreationTimestamp="2026-04-28 01:25:47 +0000 UTC" firstStartedPulling="2026-04-28 01:25:48.779048443 +0000 UTC m=+4.541627342" lastFinishedPulling="2026-04-28 01:25:58.393903645 +0000 UTC m=+14.156482550" observedRunningTime="2026-04-28 01:26:04.299613057 +0000 UTC m=+20.062191959" watchObservedRunningTime="2026-04-28 01:26:04.300957726 +0000 UTC m=+20.063536636" Apr 28 01:26:04.301813 kubelet[2975]: I0428 01:26:04.301780 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rxfcj" 
podStartSLOduration=2.9879113630000003 podStartE2EDuration="16.301769594s" podCreationTimestamp="2026-04-28 01:25:48 +0000 UTC" firstStartedPulling="2026-04-28 01:25:48.927916175 +0000 UTC m=+4.690495074" lastFinishedPulling="2026-04-28 01:26:02.241774396 +0000 UTC m=+18.004353305" observedRunningTime="2026-04-28 01:26:04.199059217 +0000 UTC m=+19.961638137" watchObservedRunningTime="2026-04-28 01:26:04.301769594 +0000 UTC m=+20.064348502" Apr 28 01:26:04.455878 kubelet[2975]: E0428 01:26:04.455855 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:04.460135 containerd[1586]: time="2026-04-28T01:26:04.459634666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-56ckt,Uid:b3bab088-fa78-428a-bf00-fef2da577218,Namespace:kube-system,Attempt:0,}" Apr 28 01:26:05.035977 kubelet[2975]: E0428 01:26:05.035825 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:05.035977 kubelet[2975]: E0428 01:26:05.035852 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:06.115892 kubelet[2975]: E0428 01:26:06.107761 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:06.209920 systemd-networkd[1242]: cilium_host: Link UP Apr 28 01:26:06.210235 systemd-networkd[1242]: cilium_net: Link UP Apr 28 01:26:06.210239 systemd-networkd[1242]: cilium_net: Gained carrier Apr 28 01:26:06.210416 systemd-networkd[1242]: cilium_host: Gained carrier Apr 28 01:26:06.380953 systemd-networkd[1242]: cilium_host: Gained IPv6LL Apr 28 
01:26:06.419268 systemd-networkd[1242]: cilium_vxlan: Link UP Apr 28 01:26:06.419866 systemd-networkd[1242]: cilium_vxlan: Gained carrier Apr 28 01:26:06.539126 systemd-networkd[1242]: cilium_net: Gained IPv6LL Apr 28 01:26:06.876274 kernel: NET: Registered PF_ALG protocol family Apr 28 01:26:08.204591 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL Apr 28 01:26:08.565666 systemd-networkd[1242]: lxc_health: Link UP Apr 28 01:26:08.575162 systemd-networkd[1242]: lxc_health: Gained carrier Apr 28 01:26:08.974428 systemd-networkd[1242]: lxcd8dfe2318347: Link UP Apr 28 01:26:08.992111 kernel: eth0: renamed from tmp9b47f Apr 28 01:26:09.012782 systemd-networkd[1242]: lxcd8dfe2318347: Gained carrier Apr 28 01:26:09.095306 systemd-networkd[1242]: lxc155e35e2bd44: Link UP Apr 28 01:26:09.106258 kernel: eth0: renamed from tmp7bc86 Apr 28 01:26:09.114937 systemd-networkd[1242]: lxc155e35e2bd44: Gained carrier Apr 28 01:26:09.998723 systemd-networkd[1242]: lxc_health: Gained IPv6LL Apr 28 01:26:10.470868 kubelet[2975]: E0428 01:26:10.470736 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:10.572755 systemd-networkd[1242]: lxc155e35e2bd44: Gained IPv6LL Apr 28 01:26:10.956114 systemd-networkd[1242]: lxcd8dfe2318347: Gained IPv6LL Apr 28 01:26:15.576602 systemd[1]: run-containerd-runc-k8s.io-8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc-runc.uXv8La.mount: Deactivated successfully. Apr 28 01:26:15.740942 containerd[1586]: time="2026-04-28T01:26:15.740591894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:26:15.740942 containerd[1586]: time="2026-04-28T01:26:15.740635901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:26:15.740942 containerd[1586]: time="2026-04-28T01:26:15.740648320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:26:15.740942 containerd[1586]: time="2026-04-28T01:26:15.740733243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:26:15.741688 containerd[1586]: time="2026-04-28T01:26:15.741615754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:26:15.741768 containerd[1586]: time="2026-04-28T01:26:15.741737047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:26:15.741827 containerd[1586]: time="2026-04-28T01:26:15.741774679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:26:15.742068 containerd[1586]: time="2026-04-28T01:26:15.741975915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:26:15.795042 kubelet[2975]: I0428 01:26:15.794957 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 01:26:15.798003 kubelet[2975]: E0428 01:26:15.797924 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:15.799733 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:26:15.807684 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:26:15.867910 containerd[1586]: time="2026-04-28T01:26:15.866853448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-56ckt,Uid:b3bab088-fa78-428a-bf00-fef2da577218,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bc86a9c114ffe70e11667f21cd551ab60dd96743a0e95db0c6d0d580337e2a3\"" Apr 28 01:26:15.868059 kubelet[2975]: E0428 01:26:15.867739 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:15.887894 containerd[1586]: time="2026-04-28T01:26:15.887448050Z" level=info msg="CreateContainer within sandbox \"7bc86a9c114ffe70e11667f21cd551ab60dd96743a0e95db0c6d0d580337e2a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:26:15.898428 containerd[1586]: time="2026-04-28T01:26:15.898080758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9k4ss,Uid:0053c634-d492-45cd-94d8-0d81ec3088f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b47f2700f05c9c3f3191114f0679f76c4ab789d986696996022112abb6eb8fd\"" Apr 28 01:26:15.904281 kubelet[2975]: E0428 01:26:15.903410 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:15.911124 containerd[1586]: time="2026-04-28T01:26:15.911041761Z" level=info msg="CreateContainer within sandbox \"9b47f2700f05c9c3f3191114f0679f76c4ab789d986696996022112abb6eb8fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:26:15.938713 containerd[1586]: time="2026-04-28T01:26:15.938290474Z" level=info msg="CreateContainer within sandbox \"7bc86a9c114ffe70e11667f21cd551ab60dd96743a0e95db0c6d0d580337e2a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b859af55352435c1021918b1704bdb078c369c9e66f7c73faeb45ef6ffc24c4\"" Apr 28 01:26:15.942866 containerd[1586]: time="2026-04-28T01:26:15.942803496Z" level=info msg="StartContainer for \"8b859af55352435c1021918b1704bdb078c369c9e66f7c73faeb45ef6ffc24c4\"" Apr 28 01:26:15.951719 containerd[1586]: time="2026-04-28T01:26:15.950967415Z" level=info msg="CreateContainer within sandbox \"9b47f2700f05c9c3f3191114f0679f76c4ab789d986696996022112abb6eb8fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f123be93e60a505aaed9eec934901d227c0f27ae58145a8bae85887c7a41cf1\"" Apr 28 01:26:15.955552 containerd[1586]: time="2026-04-28T01:26:15.954919126Z" level=info msg="StartContainer for \"2f123be93e60a505aaed9eec934901d227c0f27ae58145a8bae85887c7a41cf1\"" Apr 28 01:26:16.119861 containerd[1586]: time="2026-04-28T01:26:16.119271256Z" level=info msg="StartContainer for \"8b859af55352435c1021918b1704bdb078c369c9e66f7c73faeb45ef6ffc24c4\" returns successfully" Apr 28 01:26:16.210925 containerd[1586]: time="2026-04-28T01:26:16.209878503Z" level=info msg="StartContainer for \"2f123be93e60a505aaed9eec934901d227c0f27ae58145a8bae85887c7a41cf1\" returns successfully" Apr 28 01:26:16.216777 kubelet[2975]: E0428 01:26:16.216747 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:16.260352 kubelet[2975]: E0428 01:26:16.260041 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:16.261167 kubelet[2975]: E0428 01:26:16.260894 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:16.389518 kubelet[2975]: I0428 01:26:16.388827 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-56ckt" podStartSLOduration=28.388691528 podStartE2EDuration="28.388691528s" podCreationTimestamp="2026-04-28 01:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:26:16.319272269 +0000 UTC m=+32.081851176" watchObservedRunningTime="2026-04-28 01:26:16.388691528 +0000 UTC m=+32.151270435" Apr 28 01:26:17.290368 kubelet[2975]: E0428 01:26:17.277273 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:17.310407 kubelet[2975]: E0428 01:26:17.310263 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:17.435844 kubelet[2975]: I0428 01:26:17.429884 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9k4ss" podStartSLOduration=29.429320519 podStartE2EDuration="29.429320519s" podCreationTimestamp="2026-04-28 01:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 
01:26:16.405969847 +0000 UTC m=+32.168548779" watchObservedRunningTime="2026-04-28 01:26:17.429320519 +0000 UTC m=+33.191899422" Apr 28 01:26:18.305185 kubelet[2975]: E0428 01:26:18.304504 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:18.305185 kubelet[2975]: E0428 01:26:18.304621 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:18.914823 sudo[1825]: pam_unix(sudo:session): session closed for user root Apr 28 01:26:18.930738 sshd[1818]: pam_unix(sshd:session): session closed for user core Apr 28 01:26:18.946523 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:49718.service: Deactivated successfully. Apr 28 01:26:18.956575 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 01:26:18.957504 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Apr 28 01:26:18.971822 systemd-logind[1560]: Removed session 7. 
Apr 28 01:26:19.317886 kubelet[2975]: E0428 01:26:19.317774 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:51.721141 kubelet[2975]: E0428 01:26:51.719399 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:26:55.814929 kubelet[2975]: E0428 01:26:55.814468 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.098s" Apr 28 01:26:57.905425 kubelet[2975]: E0428 01:26:57.893845 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.14s" Apr 28 01:26:59.747803 kubelet[2975]: E0428 01:26:59.746053 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:05.730273 kubelet[2975]: E0428 01:27:05.729633 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:09.799552 kubelet[2975]: E0428 01:27:09.796994 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:10.500821 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:53596.service - OpenSSH per-connection server daemon (10.0.0.1:53596). 
Apr 28 01:27:10.839270 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 53596 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:10.848413 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:11.015292 systemd-logind[1560]: New session 8 of user core. Apr 28 01:27:11.053577 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 01:27:12.360156 sshd[4520]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:12.384407 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:53596.service: Deactivated successfully. Apr 28 01:27:12.394388 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 01:27:12.397086 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Apr 28 01:27:12.398845 systemd-logind[1560]: Removed session 8. Apr 28 01:27:17.676946 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:53598.service - OpenSSH per-connection server daemon (10.0.0.1:53598). Apr 28 01:27:17.896396 kubelet[2975]: E0428 01:27:17.895993 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:18.322973 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 53598 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:18.354417 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:18.368003 systemd-logind[1560]: New session 9 of user core. Apr 28 01:27:18.387536 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 01:27:19.184766 sshd[4543]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:19.200505 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:53598.service: Deactivated successfully. Apr 28 01:27:19.253962 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 01:27:19.256828 systemd-logind[1560]: Session 9 logged out. 
Waiting for processes to exit. Apr 28 01:27:19.259500 systemd-logind[1560]: Removed session 9. Apr 28 01:27:22.904911 kubelet[2975]: E0428 01:27:22.893868 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:24.233808 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:42418.service - OpenSSH per-connection server daemon (10.0.0.1:42418). Apr 28 01:27:24.597372 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 42418 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:24.631504 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:24.871893 systemd-logind[1560]: New session 10 of user core. Apr 28 01:27:25.039286 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 01:27:26.831287 sshd[4562]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:26.855959 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:42418.service: Deactivated successfully. Apr 28 01:27:26.866003 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 01:27:26.911531 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Apr 28 01:27:27.027593 systemd-logind[1560]: Removed session 10. Apr 28 01:27:31.019834 kubelet[2975]: E0428 01:27:30.962683 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.89s" Apr 28 01:27:31.132550 kubelet[2975]: E0428 01:27:31.132394 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:31.929354 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:47168.service - OpenSSH per-connection server daemon (10.0.0.1:47168). 
Apr 28 01:27:32.283403 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 47168 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:32.293028 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:32.323741 systemd-logind[1560]: New session 11 of user core. Apr 28 01:27:32.335942 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 01:27:32.748658 sshd[4581]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:32.766007 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:47168.service: Deactivated successfully. Apr 28 01:27:32.770401 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Apr 28 01:27:32.770924 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 01:27:32.772987 systemd-logind[1560]: Removed session 11. Apr 28 01:27:36.724787 kubelet[2975]: E0428 01:27:36.724521 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:27:37.768653 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:47172.service - OpenSSH per-connection server daemon (10.0.0.1:47172). Apr 28 01:27:37.953782 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 47172 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:37.960018 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:38.120438 systemd-logind[1560]: New session 12 of user core. Apr 28 01:27:38.153145 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 01:27:38.939003 sshd[4597]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:38.966999 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:47172.service: Deactivated successfully. Apr 28 01:27:39.003899 systemd[1]: session-12.scope: Deactivated successfully. 
Apr 28 01:27:39.030681 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Apr 28 01:27:39.034308 systemd-logind[1560]: Removed session 12. Apr 28 01:27:44.060006 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:58202.service - OpenSSH per-connection server daemon (10.0.0.1:58202). Apr 28 01:27:44.441014 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 58202 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:44.450683 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:44.548716 systemd-logind[1560]: New session 13 of user core. Apr 28 01:27:44.629966 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 01:27:46.612919 sshd[4614]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:46.656771 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:58202.service: Deactivated successfully. Apr 28 01:27:46.676862 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 01:27:46.713378 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Apr 28 01:27:46.748632 systemd-logind[1560]: Removed session 13. Apr 28 01:27:51.635746 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Apr 28 01:27:51.763767 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:51.767710 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:51.823508 systemd-logind[1560]: New session 14 of user core. Apr 28 01:27:51.837698 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 01:27:53.024532 sshd[4634]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:53.052914 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:60006.service: Deactivated successfully. Apr 28 01:27:53.069004 systemd-logind[1560]: Session 14 logged out. 
Waiting for processes to exit. Apr 28 01:27:53.069020 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 01:27:53.072259 systemd-logind[1560]: Removed session 14. Apr 28 01:27:58.047043 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:60014.service - OpenSSH per-connection server daemon (10.0.0.1:60014). Apr 28 01:27:58.160307 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 60014 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:27:58.164457 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:27:58.229186 systemd-logind[1560]: New session 15 of user core. Apr 28 01:27:58.261671 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 01:27:58.689744 sshd[4650]: pam_unix(sshd:session): session closed for user core Apr 28 01:27:58.694551 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:60014.service: Deactivated successfully. Apr 28 01:27:58.697894 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 01:27:58.698074 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Apr 28 01:27:58.700841 systemd-logind[1560]: Removed session 15. Apr 28 01:28:03.797850 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:58966.service - OpenSSH per-connection server daemon (10.0.0.1:58966). Apr 28 01:28:03.897804 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:03.904031 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:03.931842 systemd-logind[1560]: New session 16 of user core. Apr 28 01:28:03.949543 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 01:28:04.665702 sshd[4666]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:04.697292 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:58966.service: Deactivated successfully. 
Apr 28 01:28:04.733831 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 01:28:04.755693 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Apr 28 01:28:04.792542 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:58974.service - OpenSSH per-connection server daemon (10.0.0.1:58974). Apr 28 01:28:04.808287 systemd-logind[1560]: Removed session 16. Apr 28 01:28:04.893295 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 58974 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:04.896419 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:04.905908 systemd-logind[1560]: New session 17 of user core. Apr 28 01:28:04.929729 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 01:28:05.673720 sshd[4683]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:05.693702 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984). Apr 28 01:28:05.694042 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:58974.service: Deactivated successfully. Apr 28 01:28:05.745644 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 01:28:05.768084 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Apr 28 01:28:05.784028 systemd-logind[1560]: Removed session 17. Apr 28 01:28:06.180703 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:06.184101 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:06.193999 systemd-logind[1560]: New session 18 of user core. Apr 28 01:28:06.203833 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 28 01:28:06.673951 sshd[4695]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:06.683618 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:58984.service: Deactivated successfully. Apr 28 01:28:06.695009 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 01:28:06.696657 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Apr 28 01:28:06.706753 systemd-logind[1560]: Removed session 18. Apr 28 01:28:06.714072 kubelet[2975]: E0428 01:28:06.713794 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:11.873342 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:50574.service - OpenSSH per-connection server daemon (10.0.0.1:50574). Apr 28 01:28:12.158975 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 50574 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:12.165836 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:12.237941 systemd-logind[1560]: New session 19 of user core. Apr 28 01:28:12.281772 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 01:28:12.939613 sshd[4714]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:12.943895 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:50574.service: Deactivated successfully. Apr 28 01:28:12.946770 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 01:28:12.946846 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Apr 28 01:28:12.948323 systemd-logind[1560]: Removed session 19. 
Apr 28 01:28:16.725348 kubelet[2975]: E0428 01:28:16.723899 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:17.999721 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). Apr 28 01:28:18.069460 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:18.076134 sshd[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:18.098903 systemd-logind[1560]: New session 20 of user core. Apr 28 01:28:18.116885 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 01:28:18.744126 kubelet[2975]: E0428 01:28:18.743890 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:18.834485 sshd[4730]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:18.841128 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:50586.service: Deactivated successfully. Apr 28 01:28:18.844815 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 01:28:18.844936 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Apr 28 01:28:18.847194 systemd-logind[1560]: Removed session 20. Apr 28 01:28:21.713108 kubelet[2975]: E0428 01:28:21.712653 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:23.896478 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:35028.service - OpenSSH per-connection server daemon (10.0.0.1:35028). 
Apr 28 01:28:24.268759 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 35028 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:24.323401 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:24.513476 systemd-logind[1560]: New session 21 of user core. Apr 28 01:28:24.551316 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 01:28:24.729519 kubelet[2975]: E0428 01:28:24.729147 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:25.236583 sshd[4751]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:25.300848 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:35028.service: Deactivated successfully. Apr 28 01:28:25.438612 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 01:28:25.443293 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Apr 28 01:28:25.494805 systemd-logind[1560]: Removed session 21. Apr 28 01:28:30.281156 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578). Apr 28 01:28:30.386610 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:30.401149 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:30.414366 systemd-logind[1560]: New session 22 of user core. Apr 28 01:28:30.428767 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 28 01:28:31.055333 sshd[4766]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:31.062365 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:33578.service: Deactivated successfully. Apr 28 01:28:31.065881 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 28 01:28:31.065883 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Apr 28 01:28:31.067597 systemd-logind[1560]: Removed session 22. Apr 28 01:28:36.118780 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:33592.service - OpenSSH per-connection server daemon (10.0.0.1:33592). Apr 28 01:28:36.371408 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:36.374896 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:36.396682 systemd-logind[1560]: New session 23 of user core. Apr 28 01:28:36.430134 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 01:28:37.731590 kubelet[2975]: E0428 01:28:37.730076 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:28:37.916997 sshd[4781]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:37.936139 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:33592.service: Deactivated successfully. Apr 28 01:28:37.961358 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 01:28:37.964892 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Apr 28 01:28:37.981378 systemd-logind[1560]: Removed session 23. Apr 28 01:28:42.940191 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422). Apr 28 01:28:43.248664 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:43.259562 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:43.293887 systemd-logind[1560]: New session 24 of user core. Apr 28 01:28:43.300091 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 28 01:28:43.937518 sshd[4796]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:43.953495 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:58422.service: Deactivated successfully. Apr 28 01:28:43.960859 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 01:28:43.965086 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Apr 28 01:28:43.975607 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434). Apr 28 01:28:43.977302 systemd-logind[1560]: Removed session 24. Apr 28 01:28:44.070392 sshd[4812]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:44.103018 sshd[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:44.214371 systemd-logind[1560]: New session 25 of user core. Apr 28 01:28:44.226483 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 28 01:28:45.293490 sshd[4812]: pam_unix(sshd:session): session closed for user core Apr 28 01:28:45.310936 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). Apr 28 01:28:45.311852 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:58434.service: Deactivated successfully. Apr 28 01:28:45.315832 systemd[1]: session-25.scope: Deactivated successfully. Apr 28 01:28:45.336545 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit. Apr 28 01:28:45.339881 systemd-logind[1560]: Removed session 25. Apr 28 01:28:45.390762 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:28:45.396647 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:28:45.447358 systemd-logind[1560]: New session 26 of user core. 
Apr 28 01:28:45.459739 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 28 01:28:45.718576 kubelet[2975]: E0428 01:28:45.717729 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:28:47.496086 sshd[4824]: pam_unix(sshd:session): session closed for user core
Apr 28 01:28:47.521678 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:58458.service - OpenSSH per-connection server daemon (10.0.0.1:58458).
Apr 28 01:28:47.525851 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:58448.service: Deactivated successfully.
Apr 28 01:28:47.531160 systemd[1]: session-26.scope: Deactivated successfully.
Apr 28 01:28:47.543525 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit.
Apr 28 01:28:47.555006 systemd-logind[1560]: Removed session 26.
Apr 28 01:28:47.710109 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:28:47.735327 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:28:47.764414 systemd-logind[1560]: New session 27 of user core.
Apr 28 01:28:47.786930 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 28 01:28:48.785521 sshd[4847]: pam_unix(sshd:session): session closed for user core
Apr 28 01:28:48.804853 systemd[1]: Started sshd@27-10.0.0.35:22-10.0.0.1:58470.service - OpenSSH per-connection server daemon (10.0.0.1:58470).
Apr 28 01:28:48.843118 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:58458.service: Deactivated successfully.
Apr 28 01:28:48.866188 systemd[1]: session-27.scope: Deactivated successfully.
Apr 28 01:28:48.873172 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit.
Apr 28 01:28:48.882826 systemd-logind[1560]: Removed session 27.
Apr 28 01:28:48.945438 sshd[4863]: Accepted publickey for core from 10.0.0.1 port 58470 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:28:48.956093 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:28:49.010699 systemd-logind[1560]: New session 28 of user core.
Apr 28 01:28:49.028262 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 28 01:28:49.437867 sshd[4863]: pam_unix(sshd:session): session closed for user core
Apr 28 01:28:49.456416 systemd[1]: sshd@27-10.0.0.35:22-10.0.0.1:58470.service: Deactivated successfully.
Apr 28 01:28:49.462641 systemd[1]: session-28.scope: Deactivated successfully.
Apr 28 01:28:49.463754 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit.
Apr 28 01:28:49.465472 systemd-logind[1560]: Removed session 28.
Apr 28 01:28:54.685815 systemd[1]: Started sshd@28-10.0.0.35:22-10.0.0.1:36940.service - OpenSSH per-connection server daemon (10.0.0.1:36940).
Apr 28 01:28:54.996489 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 36940 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:28:54.998910 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:28:55.005946 systemd-logind[1560]: New session 29 of user core.
Apr 28 01:28:55.128454 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 28 01:28:56.234039 sshd[4884]: pam_unix(sshd:session): session closed for user core
Apr 28 01:28:56.263169 systemd[1]: sshd@28-10.0.0.35:22-10.0.0.1:36940.service: Deactivated successfully.
Apr 28 01:28:56.267328 systemd[1]: session-29.scope: Deactivated successfully.
Apr 28 01:28:56.268890 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit.
Apr 28 01:28:56.270159 systemd-logind[1560]: Removed session 29.
Apr 28 01:29:01.281376 systemd[1]: Started sshd@29-10.0.0.35:22-10.0.0.1:45706.service - OpenSSH per-connection server daemon (10.0.0.1:45706).
Apr 28 01:29:01.339599 sshd[4900]: Accepted publickey for core from 10.0.0.1 port 45706 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:01.341872 sshd[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:01.349734 systemd-logind[1560]: New session 30 of user core.
Apr 28 01:29:01.366691 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 28 01:29:01.619879 sshd[4900]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:01.624858 systemd[1]: sshd@29-10.0.0.35:22-10.0.0.1:45706.service: Deactivated successfully.
Apr 28 01:29:01.628537 systemd-logind[1560]: Session 30 logged out. Waiting for processes to exit.
Apr 28 01:29:01.628624 systemd[1]: session-30.scope: Deactivated successfully.
Apr 28 01:29:01.630171 systemd-logind[1560]: Removed session 30.
Apr 28 01:29:01.711557 kubelet[2975]: E0428 01:29:01.711421 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:06.641600 systemd[1]: Started sshd@30-10.0.0.35:22-10.0.0.1:45710.service - OpenSSH per-connection server daemon (10.0.0.1:45710).
Apr 28 01:29:06.802557 sshd[4917]: Accepted publickey for core from 10.0.0.1 port 45710 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:06.805936 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:06.878176 systemd-logind[1560]: New session 31 of user core.
Apr 28 01:29:06.888925 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 28 01:29:09.097506 sshd[4917]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:09.117108 systemd[1]: sshd@30-10.0.0.35:22-10.0.0.1:45710.service: Deactivated successfully.
Apr 28 01:29:09.132576 systemd-logind[1560]: Session 31 logged out. Waiting for processes to exit.
Apr 28 01:29:09.132651 systemd[1]: session-31.scope: Deactivated successfully.
Apr 28 01:29:09.134172 systemd-logind[1560]: Removed session 31.
Apr 28 01:29:14.397878 systemd[1]: Started sshd@31-10.0.0.35:22-10.0.0.1:53092.service - OpenSSH per-connection server daemon (10.0.0.1:53092).
Apr 28 01:29:14.747076 sshd[4932]: Accepted publickey for core from 10.0.0.1 port 53092 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:14.791993 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:14.944939 systemd-logind[1560]: New session 32 of user core.
Apr 28 01:29:14.961099 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 28 01:29:17.321953 sshd[4932]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:17.477828 systemd[1]: Started sshd@32-10.0.0.35:22-10.0.0.1:53108.service - OpenSSH per-connection server daemon (10.0.0.1:53108).
Apr 28 01:29:17.493780 systemd[1]: sshd@31-10.0.0.35:22-10.0.0.1:53092.service: Deactivated successfully.
Apr 28 01:29:17.564702 systemd[1]: session-32.scope: Deactivated successfully.
Apr 28 01:29:17.603450 systemd-logind[1560]: Session 32 logged out. Waiting for processes to exit.
Apr 28 01:29:17.731946 systemd-logind[1560]: Removed session 32.
Apr 28 01:29:18.000040 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 53108 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:18.009788 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:18.052183 systemd-logind[1560]: New session 33 of user core.
Apr 28 01:29:18.064663 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 28 01:29:20.717452 kubelet[2975]: E0428 01:29:20.717086 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:21.716602 kubelet[2975]: E0428 01:29:21.716407 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:23.129868 containerd[1586]: time="2026-04-28T01:29:23.129685172Z" level=info msg="StopContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" with timeout 30 (s)"
Apr 28 01:29:23.211953 containerd[1586]: time="2026-04-28T01:29:23.209509689Z" level=info msg="Stop container \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" with signal terminated"
Apr 28 01:29:23.720922 containerd[1586]: time="2026-04-28T01:29:23.720824473Z" level=info msg="StopContainer for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" with timeout 2 (s)"
Apr 28 01:29:23.724363 containerd[1586]: time="2026-04-28T01:29:23.724295189Z" level=info msg="Stop container \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" with signal terminated"
Apr 28 01:29:23.750021 containerd[1586]: time="2026-04-28T01:29:23.749863386Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 01:29:23.914411 systemd-networkd[1242]: lxc_health: Link DOWN
Apr 28 01:29:23.914421 systemd-networkd[1242]: lxc_health: Lost carrier
Apr 28 01:29:24.021586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7-rootfs.mount: Deactivated successfully.
Apr 28 01:29:24.042938 sshd[4946]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:24.046339 containerd[1586]: time="2026-04-28T01:29:24.046238152Z" level=info msg="shim disconnected" id=463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7 namespace=k8s.io
Apr 28 01:29:24.046339 containerd[1586]: time="2026-04-28T01:29:24.046320761Z" level=warning msg="cleaning up after shim disconnected" id=463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7 namespace=k8s.io
Apr 28 01:29:24.046339 containerd[1586]: time="2026-04-28T01:29:24.046328545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:24.077285 systemd[1]: Started sshd@33-10.0.0.35:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984).
Apr 28 01:29:24.098862 systemd[1]: sshd@32-10.0.0.35:22-10.0.0.1:53108.service: Deactivated successfully.
Apr 28 01:29:24.197275 systemd[1]: session-33.scope: Deactivated successfully.
Apr 28 01:29:24.201639 systemd-logind[1560]: Session 33 logged out. Waiting for processes to exit.
Apr 28 01:29:24.203963 systemd-logind[1560]: Removed session 33.
Apr 28 01:29:24.257962 containerd[1586]: time="2026-04-28T01:29:24.257807024Z" level=info msg="StopContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" returns successfully"
Apr 28 01:29:24.268423 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:24.271085 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:24.278418 containerd[1586]: time="2026-04-28T01:29:24.277762903Z" level=info msg="StopPodSandbox for \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\""
Apr 28 01:29:24.278418 containerd[1586]: time="2026-04-28T01:29:24.277967581Z" level=info msg="Container to stop \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.303570 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8-shm.mount: Deactivated successfully.
Apr 28 01:29:24.308126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc-rootfs.mount: Deactivated successfully.
Apr 28 01:29:24.335850 containerd[1586]: time="2026-04-28T01:29:24.333898650Z" level=info msg="shim disconnected" id=8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc namespace=k8s.io
Apr 28 01:29:24.337094 containerd[1586]: time="2026-04-28T01:29:24.333933285Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/80e728ed5f8fcced9eb900bf28e327b73bad7eae8a328c4bd82974b929852e36->@: write: broken pipe" runtime=io.containerd.runc.v2
Apr 28 01:29:24.337094 containerd[1586]: time="2026-04-28T01:29:24.336104342Z" level=warning msg="cleaning up after shim disconnected" id=8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc namespace=k8s.io
Apr 28 01:29:24.337094 containerd[1586]: time="2026-04-28T01:29:24.336796894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:24.344578 systemd-logind[1560]: New session 34 of user core.
Apr 28 01:29:24.349551 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 28 01:29:24.376372 containerd[1586]: time="2026-04-28T01:29:24.375540573Z" level=info msg="StopContainer for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" returns successfully"
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.384818578Z" level=info msg="StopPodSandbox for \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\""
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.386027859Z" level=info msg="Container to stop \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.386087782Z" level=info msg="Container to stop \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.386137397Z" level=info msg="Container to stop \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.386146837Z" level=info msg="Container to stop \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.387332 containerd[1586]: time="2026-04-28T01:29:24.386245337Z" level=info msg="Container to stop \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 01:29:24.581415 containerd[1586]: time="2026-04-28T01:29:24.580790030Z" level=info msg="shim disconnected" id=cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8 namespace=k8s.io
Apr 28 01:29:24.582642 containerd[1586]: time="2026-04-28T01:29:24.581465199Z" level=warning msg="cleaning up after shim disconnected" id=cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8 namespace=k8s.io
Apr 28 01:29:24.582642 containerd[1586]: time="2026-04-28T01:29:24.581485816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:24.630830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8-rootfs.mount: Deactivated successfully.
Apr 28 01:29:24.631036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b-shm.mount: Deactivated successfully.
Apr 28 01:29:24.949654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b-rootfs.mount: Deactivated successfully.
Apr 28 01:29:24.954344 containerd[1586]: time="2026-04-28T01:29:24.953858533Z" level=info msg="shim disconnected" id=66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b namespace=k8s.io
Apr 28 01:29:24.954344 containerd[1586]: time="2026-04-28T01:29:24.953923478Z" level=warning msg="cleaning up after shim disconnected" id=66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b namespace=k8s.io
Apr 28 01:29:24.954344 containerd[1586]: time="2026-04-28T01:29:24.953932422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:24.971553 containerd[1586]: time="2026-04-28T01:29:24.971392422Z" level=info msg="TearDown network for sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" successfully"
Apr 28 01:29:24.971553 containerd[1586]: time="2026-04-28T01:29:24.971428659Z" level=info msg="StopPodSandbox for \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" returns successfully"
Apr 28 01:29:25.051520 containerd[1586]: time="2026-04-28T01:29:25.050732997Z" level=info msg="TearDown network for sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" successfully"
Apr 28 01:29:25.053923 containerd[1586]: time="2026-04-28T01:29:25.053812596Z" level=info msg="StopPodSandbox for \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" returns successfully"
Apr 28 01:29:25.177310 kubelet[2975]: I0428 01:29:25.173885 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-hostproc\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.175968 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-hostproc" (OuterVolumeSpecName: "hostproc") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.183124 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/147c3752-e4b1-4bee-bb21-d219f93b4aba-clustermesh-secrets\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.184514 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s7q9\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-kube-api-access-5s7q9\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.184736 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-xtables-lock\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.184752 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-net\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.195521 kubelet[2975]: I0428 01:29:25.184767 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-hubble-tls\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184784 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cni-path\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184849 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-config-path\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184861 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-cgroup\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184946 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-run\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184958 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-etc-cni-netd\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.203865 kubelet[2975]: I0428 01:29:25.184973 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-lib-modules\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.204085 kubelet[2975]: I0428 01:29:25.184985 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-kernel\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.204085 kubelet[2975]: I0428 01:29:25.185189 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-bpf-maps\") pod \"147c3752-e4b1-4bee-bb21-d219f93b4aba\" (UID: \"147c3752-e4b1-4bee-bb21-d219f93b4aba\") "
Apr 28 01:29:25.204085 kubelet[2975]: I0428 01:29:25.185270 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde040ca-45b5-4f6f-8fdf-ed9859696254-cilium-config-path\") pod \"bde040ca-45b5-4f6f-8fdf-ed9859696254\" (UID: \"bde040ca-45b5-4f6f-8fdf-ed9859696254\") "
Apr 28 01:29:25.204085 kubelet[2975]: I0428 01:29:25.185377 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k9rx\" (UniqueName: \"kubernetes.io/projected/bde040ca-45b5-4f6f-8fdf-ed9859696254-kube-api-access-8k9rx\") pod \"bde040ca-45b5-4f6f-8fdf-ed9859696254\" (UID: \"bde040ca-45b5-4f6f-8fdf-ed9859696254\") "
Apr 28 01:29:25.204085 kubelet[2975]: I0428 01:29:25.187959 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204183 kubelet[2975]: I0428 01:29:25.188649 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204183 kubelet[2975]: I0428 01:29:25.188666 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204183 kubelet[2975]: I0428 01:29:25.188681 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204183 kubelet[2975]: I0428 01:29:25.187967 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204183 kubelet[2975]: I0428 01:29:25.188714 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204350 kubelet[2975]: I0428 01:29:25.188815 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204350 kubelet[2975]: I0428 01:29:25.193583 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 01:29:25.204350 kubelet[2975]: I0428 01:29:25.201542 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cni-path" (OuterVolumeSpecName: "cni-path") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.204350 kubelet[2975]: I0428 01:29:25.201618 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 01:29:25.220341 systemd[1]: var-lib-kubelet-pods-147c3752\x2de4b1\x2d4bee\x2dbb21\x2dd219f93b4aba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 28 01:29:25.220876 kubelet[2975]: I0428 01:29:25.220169 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-kube-api-access-5s7q9" (OuterVolumeSpecName: "kube-api-access-5s7q9") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "kube-api-access-5s7q9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 01:29:25.222543 kubelet[2975]: I0428 01:29:25.222513 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde040ca-45b5-4f6f-8fdf-ed9859696254-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bde040ca-45b5-4f6f-8fdf-ed9859696254" (UID: "bde040ca-45b5-4f6f-8fdf-ed9859696254"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 01:29:25.222941 kubelet[2975]: I0428 01:29:25.222862 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147c3752-e4b1-4bee-bb21-d219f93b4aba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 28 01:29:25.224045 systemd[1]: var-lib-kubelet-pods-147c3752\x2de4b1\x2d4bee\x2dbb21\x2dd219f93b4aba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5s7q9.mount: Deactivated successfully.
Apr 28 01:29:25.228123 kubelet[2975]: I0428 01:29:25.227640 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde040ca-45b5-4f6f-8fdf-ed9859696254-kube-api-access-8k9rx" (OuterVolumeSpecName: "kube-api-access-8k9rx") pod "bde040ca-45b5-4f6f-8fdf-ed9859696254" (UID: "bde040ca-45b5-4f6f-8fdf-ed9859696254"). InnerVolumeSpecName "kube-api-access-8k9rx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 01:29:25.237870 kubelet[2975]: I0428 01:29:25.237320 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "147c3752-e4b1-4bee-bb21-d219f93b4aba" (UID: "147c3752-e4b1-4bee-bb21-d219f93b4aba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 01:29:25.240600 systemd[1]: var-lib-kubelet-pods-bde040ca\x2d45b5\x2d4f6f\x2d8fdf\x2ded9859696254-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8k9rx.mount: Deactivated successfully.
Apr 28 01:29:25.240779 systemd[1]: var-lib-kubelet-pods-147c3752\x2de4b1\x2d4bee\x2dbb21\x2dd219f93b4aba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 28 01:29:25.245257 kubelet[2975]: I0428 01:29:25.244839 2975 scope.go:117] "RemoveContainer" containerID="463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7"
Apr 28 01:29:25.291582 kubelet[2975]: I0428 01:29:25.290536 2975 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5s7q9\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-kube-api-access-5s7q9\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.300773 kubelet[2975]: I0428 01:29:25.298143 2975 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.306627 2975 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320695 2975 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/147c3752-e4b1-4bee-bb21-d219f93b4aba-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320804 2975 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320813 2975 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320821 2975 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320933 2975 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320941 2975 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.321607 kubelet[2975]: I0428 01:29:25.320948 2975 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.322034 containerd[1586]: time="2026-04-28T01:29:25.320115105Z" level=info msg="RemoveContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.320955 2975 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.320961 2975 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.320967 2975 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde040ca-45b5-4f6f-8fdf-ed9859696254-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.320974 2975 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8k9rx\" (UniqueName: \"kubernetes.io/projected/bde040ca-45b5-4f6f-8fdf-ed9859696254-kube-api-access-8k9rx\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.321032 2975 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/147c3752-e4b1-4bee-bb21-d219f93b4aba-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.324621 kubelet[2975]: I0428 01:29:25.321040 2975 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/147c3752-e4b1-4bee-bb21-d219f93b4aba-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 28 01:29:25.439369 containerd[1586]: time="2026-04-28T01:29:25.437795315Z" level=info msg="RemoveContainer for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" returns successfully"
Apr 28 01:29:25.444260 kubelet[2975]: I0428 01:29:25.444068 2975 scope.go:117] "RemoveContainer" containerID="463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7"
Apr 28 01:29:25.445658 containerd[1586]: time="2026-04-28T01:29:25.445503009Z" level=error msg="ContainerStatus for \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\": not found"
Apr 28 01:29:25.446077 kubelet[2975]: E0428 01:29:25.445978 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\": not found" containerID="463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7"
Apr 28 01:29:25.446369 kubelet[2975]: I0428 01:29:25.446271 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7"} err="failed to get container status \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\": rpc error: code = NotFound desc = an error occurred when try to find container \"463982eee743fc94e9ec866279611339a90fb062a925131d6a98db72a5bebda7\": not found"
Apr 28 01:29:25.446369 kubelet[2975]: I0428 01:29:25.446346 2975 scope.go:117] "RemoveContainer" containerID="8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc"
Apr 28 01:29:25.460374 containerd[1586]: time="2026-04-28T01:29:25.458135860Z" level=info msg="RemoveContainer for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\""
Apr 28 01:29:25.521055 containerd[1586]: time="2026-04-28T01:29:25.520317150Z" level=info msg="RemoveContainer for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" returns successfully"
Apr 28 01:29:25.521687 kubelet[2975]: I0428 01:29:25.521297 2975 scope.go:117] "RemoveContainer" containerID="6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04"
Apr 28 01:29:25.525634 containerd[1586]: time="2026-04-28T01:29:25.525294642Z" level=info msg="RemoveContainer for \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\""
Apr 28 01:29:25.539187 containerd[1586]: time="2026-04-28T01:29:25.538967310Z" level=info msg="RemoveContainer for \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\" returns successfully"
Apr 28 01:29:25.545259 kubelet[2975]: I0428 01:29:25.544364 2975 scope.go:117] "RemoveContainer" containerID="b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523"
Apr 28 01:29:25.550166 containerd[1586]: time="2026-04-28T01:29:25.549988832Z" level=info msg="RemoveContainer for \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\""
Apr 28 01:29:25.567695 containerd[1586]: time="2026-04-28T01:29:25.566621044Z" level=info msg="RemoveContainer for \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\" returns successfully"
Apr 28 01:29:25.576594 kubelet[2975]: I0428 01:29:25.575715 2975 scope.go:117] "RemoveContainer" containerID="54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c"
Apr 28 01:29:25.600887 containerd[1586]: time="2026-04-28T01:29:25.600763608Z" level=info msg="RemoveContainer for \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\""
Apr 28 01:29:25.717186 containerd[1586]: time="2026-04-28T01:29:25.716343269Z" level=info msg="RemoveContainer for \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\" returns successfully"
Apr 28 01:29:25.722089 kubelet[2975]: I0428 01:29:25.721934 2975 scope.go:117] "RemoveContainer" containerID="ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e"
Apr 28 01:29:25.727483 containerd[1586]: time="2026-04-28T01:29:25.727380181Z" level=info msg="RemoveContainer for \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\""
Apr 28 01:29:25.758635 containerd[1586]: time="2026-04-28T01:29:25.758069474Z" level=info msg="RemoveContainer for \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\" returns successfully"
Apr 28 01:29:25.767323 kubelet[2975]: I0428 01:29:25.766685 2975 scope.go:117] "RemoveContainer" containerID="8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc"
Apr 28 01:29:25.769494 containerd[1586]: time="2026-04-28T01:29:25.769308170Z" level=error msg="ContainerStatus for \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\": not found"
Apr 28 01:29:25.769936 kubelet[2975]: E0428 01:29:25.769828 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\": not found" containerID="8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc"
Apr 28 01:29:25.770959 kubelet[2975]: I0428 01:29:25.770402 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc"} err="failed to get container status \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e91b5db624f12f794016732ac25f768a02f0069b07fde0c36bcc3b8eab5f3cc\": not found"
Apr 28 01:29:25.770959 kubelet[2975]: I0428 01:29:25.770631 2975 scope.go:117] "RemoveContainer" containerID="6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04"
Apr 28 01:29:25.771170 containerd[1586]: time="2026-04-28T01:29:25.770917967Z" level=error msg="ContainerStatus for \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\": not found"
Apr 28 01:29:25.771643 kubelet[2975]: E0428 01:29:25.771492 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\": not found" containerID="6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04"
Apr 28 01:29:25.771643 kubelet[2975]: I0428 01:29:25.771556 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04"} err="failed to get container status \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bcf15eb8c0ec4e64e78410c92eab45cfe64cf530a42a1e62caa6329fbc09e04\": not found"
Apr 28 01:29:25.771643 kubelet[2975]: I0428 01:29:25.771581 2975 scope.go:117] "RemoveContainer" containerID="b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523"
Apr 28 01:29:25.771873 containerd[1586]: time="2026-04-28T01:29:25.771818343Z" level=error msg="ContainerStatus for \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\": not found"
Apr 28 01:29:25.772094 kubelet[2975]: E0428 01:29:25.772007 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\": not found" containerID="b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523"
Apr 28 01:29:25.772315 kubelet[2975]: I0428 01:29:25.772095 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523"} err="failed to get container status \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0a283dd482e55015c99c19bf3f7026f7a0f1bb74ff2cbeca94588daa90f4523\": not found"
Apr 28 01:29:25.772315 kubelet[2975]: I0428 01:29:25.772271 2975 scope.go:117] "RemoveContainer" containerID="54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c"
Apr 28 01:29:25.772649 containerd[1586]: time="2026-04-28T01:29:25.772598316Z" level=error msg="ContainerStatus for \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\": not found"
Apr 28 01:29:25.772886 kubelet[2975]: E0428 01:29:25.772786 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\": not found" containerID="54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c"
Apr 28 01:29:25.772886 kubelet[2975]: I0428 01:29:25.772805 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c"} err="failed to get container status \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"54dbbd8394d94bed89cdafa906a1cbdf08285c992fdc83cb0c03d918b6ae5e6c\": not found"
Apr 28 01:29:25.772886 kubelet[2975]: I0428 01:29:25.772818 2975 scope.go:117] "RemoveContainer" containerID="ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e"
Apr 28 01:29:25.773114 containerd[1586]: time="2026-04-28T01:29:25.772965036Z" level=error msg="ContainerStatus for \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\": not found"
Apr 28 01:29:25.773144 kubelet[2975]: E0428 01:29:25.773103 2975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\": not found" containerID="ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e"
Apr 28 01:29:25.773144 kubelet[2975]: I0428 01:29:25.773121 2975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e"} err="failed to get container status \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec08ae7e067948e801d3b4e294924ec86bdd3a81569f1c87383c2851cf0e6a9e\": not found"
Apr 28 01:29:26.777123 kubelet[2975]: I0428 01:29:26.776895 2975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147c3752-e4b1-4bee-bb21-d219f93b4aba" path="/var/lib/kubelet/pods/147c3752-e4b1-4bee-bb21-d219f93b4aba/volumes"
Apr 28 01:29:26.812128 sshd[5011]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:26.826182 kubelet[2975]: I0428 01:29:26.812182 2975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde040ca-45b5-4f6f-8fdf-ed9859696254" path="/var/lib/kubelet/pods/bde040ca-45b5-4f6f-8fdf-ed9859696254/volumes"
Apr 28 01:29:26.937128 systemd[1]: Started sshd@34-10.0.0.35:22-10.0.0.1:58992.service - OpenSSH per-connection server daemon (10.0.0.1:58992).
Apr 28 01:29:26.938034 systemd[1]: sshd@33-10.0.0.35:22-10.0.0.1:58984.service: Deactivated successfully.
Apr 28 01:29:27.040641 systemd[1]: session-34.scope: Deactivated successfully.
Apr 28 01:29:27.126284 systemd-logind[1560]: Session 34 logged out. Waiting for processes to exit.
Apr 28 01:29:27.352747 systemd-logind[1560]: Removed session 34.
Apr 28 01:29:27.871725 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 58992 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:28.014906 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:28.023037 systemd-logind[1560]: New session 35 of user core.
Apr 28 01:29:28.044750 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145629 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-cilium-cgroup\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145757 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-host-proc-sys-kernel\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145772 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-etc-cni-netd\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145796 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-cni-path\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145812 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-cilium-run\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.148270 kubelet[2975]: I0428 01:29:28.145823 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-lib-modules\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153482 kubelet[2975]: I0428 01:29:28.145834 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-clustermesh-secrets\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153482 kubelet[2975]: I0428 01:29:28.145980 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-cilium-ipsec-secrets\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153482 kubelet[2975]: I0428 01:29:28.145993 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjwq\" (UniqueName: \"kubernetes.io/projected/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-kube-api-access-tpjwq\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153482 kubelet[2975]: I0428 01:29:28.146010 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-hostproc\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153482 kubelet[2975]: I0428 01:29:28.146023 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-xtables-lock\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.152861 sshd[5130]: pam_unix(sshd:session): session closed for user core
Apr 28 01:29:28.153640 kubelet[2975]: I0428 01:29:28.146036 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-cilium-config-path\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153640 kubelet[2975]: I0428 01:29:28.146116 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-hubble-tls\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153640 kubelet[2975]: I0428 01:29:28.146128 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-bpf-maps\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.153640 kubelet[2975]: I0428 01:29:28.146187 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4262882-f9da-4f8d-8dd3-2c2c65993ce7-host-proc-sys-net\") pod \"cilium-gjq5g\" (UID: \"f4262882-f9da-4f8d-8dd3-2c2c65993ce7\") " pod="kube-system/cilium-gjq5g"
Apr 28 01:29:28.166851 systemd[1]: Started sshd@35-10.0.0.35:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002).
Apr 28 01:29:28.167931 systemd[1]: sshd@34-10.0.0.35:22-10.0.0.1:58992.service: Deactivated successfully.
Apr 28 01:29:28.170582 systemd-logind[1560]: Session 35 logged out. Waiting for processes to exit.
Apr 28 01:29:28.171573 systemd[1]: session-35.scope: Deactivated successfully.
Apr 28 01:29:28.172611 systemd-logind[1560]: Removed session 35.
Apr 28 01:29:28.381375 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:29:28.427773 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:29:28.528327 systemd-logind[1560]: New session 36 of user core.
Apr 28 01:29:28.562648 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 28 01:29:28.639843 kubelet[2975]: E0428 01:29:28.639597 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:28.685688 containerd[1586]: time="2026-04-28T01:29:28.684113116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjq5g,Uid:f4262882-f9da-4f8d-8dd3-2c2c65993ce7,Namespace:kube-system,Attempt:0,}"
Apr 28 01:29:28.770334 kubelet[2975]: E0428 01:29:28.769942 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:29:28.806700 kubelet[2975]: E0428 01:29:28.806661 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:29.374986 containerd[1586]: time="2026-04-28T01:29:29.372292104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 01:29:29.374986 containerd[1586]: time="2026-04-28T01:29:29.374394464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 01:29:29.374986 containerd[1586]: time="2026-04-28T01:29:29.374421939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:29:29.374986 containerd[1586]: time="2026-04-28T01:29:29.375098398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:29:30.001837 containerd[1586]: time="2026-04-28T01:29:30.001333078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjq5g,Uid:f4262882-f9da-4f8d-8dd3-2c2c65993ce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff84bd1c99c8dc38a258a9fe9930bbcc6428b835486ecbbb303b6c3504d93ae\""
Apr 28 01:29:30.038109 kubelet[2975]: E0428 01:29:30.038002 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:30.103360 containerd[1586]: time="2026-04-28T01:29:30.102033532Z" level=info msg="CreateContainer within sandbox \"7ff84bd1c99c8dc38a258a9fe9930bbcc6428b835486ecbbb303b6c3504d93ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 28 01:29:30.169502 containerd[1586]: time="2026-04-28T01:29:30.169322137Z" level=info msg="CreateContainer within sandbox \"7ff84bd1c99c8dc38a258a9fe9930bbcc6428b835486ecbbb303b6c3504d93ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868\""
Apr 28 01:29:30.188286 containerd[1586]: time="2026-04-28T01:29:30.185694007Z" level=info msg="StartContainer for \"753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868\""
Apr 28 01:29:30.543103 containerd[1586]: time="2026-04-28T01:29:30.543011950Z" level=info msg="StartContainer for \"753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868\" returns successfully"
Apr 28 01:29:31.167844 containerd[1586]: time="2026-04-28T01:29:31.166839844Z" level=info msg="shim disconnected" id=753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868 namespace=k8s.io
Apr 28 01:29:31.167844 containerd[1586]: time="2026-04-28T01:29:31.167101197Z" level=warning msg="cleaning up after shim disconnected" id=753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868 namespace=k8s.io
Apr 28 01:29:31.167844 containerd[1586]: time="2026-04-28T01:29:31.167110096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:31.174297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-753b5e8b91bd077e53cb2fc3b7b6f635c4eba0890733bd3f956419d1ec8ad868-rootfs.mount: Deactivated successfully.
Apr 28 01:29:31.749348 kubelet[2975]: E0428 01:29:31.747550 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:31.977235 containerd[1586]: time="2026-04-28T01:29:31.966011245Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:29:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 01:29:33.981749 kubelet[2975]: E0428 01:29:33.981566 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:29:34.028137 kubelet[2975]: I0428 01:29:33.999353 2975 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T01:29:33Z","lastTransitionTime":"2026-04-28T01:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 28 01:29:34.037511 kubelet[2975]: E0428 01:29:34.000917 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s"
Apr 28 01:29:39.833911 kubelet[2975]: E0428 01:29:39.804271 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:51.093120 kubelet[2975]: E0428 01:29:51.032816 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:29:52.504848 containerd[1586]: time="2026-04-28T01:29:52.461616350Z" level=info msg="StopPodSandbox for \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\""
Apr 28 01:29:52.517588 containerd[1586]: time="2026-04-28T01:29:52.501419596Z" level=info msg="CreateContainer within sandbox \"7ff84bd1c99c8dc38a258a9fe9930bbcc6428b835486ecbbb303b6c3504d93ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 28 01:29:52.574428 containerd[1586]: time="2026-04-28T01:29:52.569577648Z" level=info msg="TearDown network for sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" successfully"
Apr 28 01:29:52.576688 containerd[1586]: time="2026-04-28T01:29:52.576059911Z" level=info msg="StopPodSandbox for \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" returns successfully"
Apr 28 01:29:52.695300 kubelet[2975]: E0428 01:29:52.694417 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.72s"
Apr 28 01:29:52.870004 containerd[1586]: time="2026-04-28T01:29:52.867144703Z" level=info msg="RemovePodSandbox for \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\""
Apr 28 01:29:52.887485 containerd[1586]: time="2026-04-28T01:29:52.871088178Z" level=info msg="Forcibly stopping sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\""
Apr 28 01:29:52.912073 containerd[1586]: time="2026-04-28T01:29:52.900731551Z" level=info msg="TearDown network for sandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" successfully"
Apr 28 01:29:53.068721 kubelet[2975]: E0428 01:29:53.063821 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-56ckt" podUID="b3bab088-fa78-428a-bf00-fef2da577218"
Apr 28 01:29:53.112341 kubelet[2975]: E0428 01:29:53.110704 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:53.157123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca-rootfs.mount: Deactivated successfully.
Apr 28 01:29:55.055271 containerd[1586]: time="2026-04-28T01:29:54.992456847Z" level=info msg="shim disconnected" id=057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca namespace=k8s.io
Apr 28 01:29:55.057886 containerd[1586]: time="2026-04-28T01:29:55.056589407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 28 01:29:55.057886 containerd[1586]: time="2026-04-28T01:29:55.056882433Z" level=info msg="RemovePodSandbox \"66dcbb0a5cd57ffc6cc6d84cece053f33507c34f0533274e8700a012783b6c2b\" returns successfully"
Apr 28 01:29:55.059324 kubelet[2975]: E0428 01:29:55.059171 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:29:55.060677 containerd[1586]: time="2026-04-28T01:29:55.060350432Z" level=warning msg="cleaning up after shim disconnected" id=057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca namespace=k8s.io
Apr 28 01:29:55.060677 containerd[1586]: time="2026-04-28T01:29:55.060384244Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:29:55.124511 containerd[1586]: time="2026-04-28T01:29:55.124387678Z" level=info msg="StopPodSandbox for \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\""
Apr 28 01:29:55.125622 containerd[1586]: time="2026-04-28T01:29:55.124582428Z" level=info msg="TearDown network for sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" successfully"
Apr 28 01:29:55.125622 containerd[1586]: time="2026-04-28T01:29:55.124638397Z" level=info msg="StopPodSandbox for \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" returns successfully"
Apr 28 01:29:55.194758 containerd[1586]: time="2026-04-28T01:29:55.186329831Z" level=error msg="collecting metrics for 057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca" error="ttrpc: closed: unknown"
Apr 28 01:29:55.390311 kubelet[2975]: E0428 01:29:55.382126 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-56ckt" podUID="b3bab088-fa78-428a-bf00-fef2da577218"
Apr 28 01:29:55.606642 containerd[1586]: time="2026-04-28T01:29:55.594668040Z" level=info msg="RemovePodSandbox for \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\""
Apr 28 01:29:55.606642 containerd[1586]: time="2026-04-28T01:29:55.594941801Z" level=info msg="Forcibly stopping sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\""
Apr 28 01:29:55.606642 containerd[1586]: time="2026-04-28T01:29:55.595154812Z" level=info msg="TearDown network for sandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" successfully"
Apr 28 01:29:55.671516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd-rootfs.mount: Deactivated successfully.
Apr 28 01:29:55.789404 kubelet[2975]: E0428 01:29:55.788125 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.593s" Apr 28 01:29:55.799180 containerd[1586]: time="2026-04-28T01:29:55.799066043Z" level=info msg="shim disconnected" id=df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd namespace=k8s.io Apr 28 01:29:55.801366 containerd[1586]: time="2026-04-28T01:29:55.800841702Z" level=warning msg="cleaning up after shim disconnected" id=df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd namespace=k8s.io Apr 28 01:29:55.801366 containerd[1586]: time="2026-04-28T01:29:55.800865149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:29:56.443490 containerd[1586]: time="2026-04-28T01:29:56.424527524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:29:56.443490 containerd[1586]: time="2026-04-28T01:29:56.439795817Z" level=info msg="RemovePodSandbox \"cc3427f0ec52e736ce4960920746e8dd625ed2dfd96df52b445da31b5b14d9a8\" returns successfully" Apr 28 01:29:57.268546 kubelet[2975]: E0428 01:29:57.265969 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:29:57.973134 containerd[1586]: time="2026-04-28T01:29:57.950972556Z" level=info msg="CreateContainer within sandbox \"7ff84bd1c99c8dc38a258a9fe9930bbcc6428b835486ecbbb303b6c3504d93ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\"" Apr 28 01:29:59.199893 containerd[1586]: time="2026-04-28T01:29:59.198808047Z" level=info msg="StartContainer for \"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\"" Apr 28 01:29:59.309709 containerd[1586]: time="2026-04-28T01:29:59.309446347Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:29:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 01:30:02.961456 containerd[1586]: time="2026-04-28T01:30:02.959895641Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd delete" error="signal: killed" namespace=k8s.io Apr 28 01:30:03.094484 containerd[1586]: time="2026-04-28T01:30:03.039453664Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" 
id=df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd namespace=k8s.io Apr 28 01:30:03.535723 containerd[1586]: time="2026-04-28T01:30:03.423131796Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca Apr 28 01:30:04.382287 containerd[1586]: time="2026-04-28T01:30:04.332820896Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd Apr 28 01:30:05.428715 kubelet[2975]: E0428 01:30:05.427893 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:30:05.616847 kubelet[2975]: E0428 01:30:05.615044 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.818s" Apr 28 01:30:05.630752 kubelet[2975]: E0428 01:30:05.630564 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9k4ss" podUID="0053c634-d492-45cd-94d8-0d81ec3088f9" Apr 28 01:30:05.683644 kubelet[2975]: E0428 01:30:05.677480 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:30:05.915723 kubelet[2975]: E0428 01:30:05.908335 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="kube-system/coredns-674b8bbfcf-56ckt" podUID="b3bab088-fa78-428a-bf00-fef2da577218" Apr 28 01:30:07.851391 kubelet[2975]: E0428 01:30:07.845980 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.137s" Apr 28 01:30:11.733987 kubelet[2975]: I0428 01:30:11.713835 2975 scope.go:117] "RemoveContainer" containerID="057a5014731b90dbeae37eac1703b47b7a6a71a60084b7cd58e0c5221db23cca" Apr 28 01:30:13.385460 kubelet[2975]: E0428 01:30:13.184112 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:30:15.162184 kubelet[2975]: E0428 01:30:15.162048 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:30:20.133120 containerd[1586]: time="2026-04-28T01:30:20.132382426Z" level=info msg="StartContainer for \"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\" returns successfully" Apr 28 01:30:21.969062 containerd[1586]: time="2026-04-28T01:30:21.967986078Z" level=info msg="CreateContainer within sandbox \"8b6266d7ba8c24c0e2cc849dc0e30a0f3e9d83bec15a504b5197217a01c44716\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 28 01:30:23.610644 kubelet[2975]: E0428 01:30:23.610559 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9k4ss" podUID="0053c634-d492-45cd-94d8-0d81ec3088f9" Apr 28 01:30:24.298363 kubelet[2975]: E0428 01:30:24.295961 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.495s" Apr 28 
01:30:26.195913 kubelet[2975]: E0428 01:30:26.191282 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:30:26.859821 kubelet[2975]: E0428 01:30:26.249845 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-56ckt" podUID="b3bab088-fa78-428a-bf00-fef2da577218" Apr 28 01:30:31.605458 kubelet[2975]: I0428 01:30:31.550259 2975 scope.go:117] "RemoveContainer" containerID="df40bcba28c46e50f6c1089a06c527aa7bc27c9ff638ebcbcb4bf52d3e4ab5cd" Apr 28 01:30:33.795849 containerd[1586]: time="2026-04-28T01:30:33.783788641Z" level=error msg="failed to handle container TaskExit event container_id:\"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\" id:\"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\" pid:5332 exited_at:{seconds:1777339820 nanos:657898899}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:30:35.626081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98-rootfs.mount: Deactivated successfully. 
Apr 28 01:30:36.026583 containerd[1586]: time="2026-04-28T01:30:36.008778583Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 01:30:36.026583 containerd[1586]: time="2026-04-28T01:30:36.015580200Z" level=info msg="TaskExit event container_id:\"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\" id:\"57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98\" pid:5332 exited_at:{seconds:1777339820 nanos:657898899}" Apr 28 01:30:36.887296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204541052.mount: Deactivated successfully. Apr 28 01:30:37.200811 kubelet[2975]: E0428 01:30:37.199772 2975 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:30:37.955188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320063357.mount: Deactivated successfully. Apr 28 01:30:38.365838 kubelet[2975]: E0428 01:30:38.365623 2975 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:30:38.703632 containerd[1586]: time="2026-04-28T01:30:38.631631499Z" level=info msg="CreateContainer within sandbox \"8b6266d7ba8c24c0e2cc849dc0e30a0f3e9d83bec15a504b5197217a01c44716\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0ba534d1f406f4532ff0728d1486c30c6c1d08e5e4c4050f92cf61f488514003\"" Apr 28 01:30:39.427704 sshd[5140]: pam_unix(sshd:session): session closed for user core Apr 28 01:30:39.798873 kubelet[2975]: E0428 01:30:39.798845 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.943s" Apr 28 01:30:40.007292 systemd[1]: sshd@35-10.0.0.35:22-10.0.0.1:59002.service: Deactivated successfully. 
Apr 28 01:30:40.217721 containerd[1586]: time="2026-04-28T01:30:40.216350777Z" level=info msg="StartContainer for \"0ba534d1f406f4532ff0728d1486c30c6c1d08e5e4c4050f92cf61f488514003\"" Apr 28 01:30:40.319551 systemd[1]: session-36.scope: Deactivated successfully. Apr 28 01:30:40.618127 systemd-logind[1560]: Session 36 logged out. Waiting for processes to exit. Apr 28 01:30:41.023608 systemd-logind[1560]: Removed session 36. Apr 28 01:30:41.268514 containerd[1586]: time="2026-04-28T01:30:41.266156754Z" level=info msg="shim disconnected" id=57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98 namespace=k8s.io Apr 28 01:30:41.268514 containerd[1586]: time="2026-04-28T01:30:41.266603303Z" level=warning msg="cleaning up after shim disconnected" id=57f7dc148a4984d1d354b04b7c1f0ca00b0624ae9cc8a486ecbabc8e2763af98 namespace=k8s.io Apr 28 01:30:41.268514 containerd[1586]: time="2026-04-28T01:30:41.266720157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:30:41.289827 kubelet[2975]: E0428 01:30:41.286946 2975 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.487s"