Apr 14 01:10:16.895412 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 01:10:16.895432 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:16.895442 kernel: BIOS-provided physical RAM map:
Apr 14 01:10:16.895447 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 01:10:16.895452 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 01:10:16.895456 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 01:10:16.895461 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 01:10:16.895465 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 01:10:16.895469 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 01:10:16.895475 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 01:10:16.895479 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 01:10:16.895483 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 01:10:16.895487 kernel: NX (Execute Disable) protection: active
Apr 14 01:10:16.895491 kernel: APIC: Static calls initialized
Apr 14 01:10:16.895497 kernel: SMBIOS 2.8 present.
Apr 14 01:10:16.895503 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 01:10:16.895508 kernel: Hypervisor detected: KVM
Apr 14 01:10:16.895512 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 01:10:16.895517 kernel: kvm-clock: using sched offset of 3607376166 cycles
Apr 14 01:10:16.895522 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 01:10:16.895527 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 01:10:16.895531 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 01:10:16.895536 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 01:10:16.895541 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 01:10:16.895547 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 01:10:16.895552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 01:10:16.895556 kernel: Using GB pages for direct mapping
Apr 14 01:10:16.895561 kernel: ACPI: Early table checksum verification disabled
Apr 14 01:10:16.895566 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 01:10:16.895570 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895575 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895580 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895584 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 01:10:16.895590 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895595 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895599 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895604 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.895609 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 01:10:16.895613 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 01:10:16.895618 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 01:10:16.895625 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 01:10:16.895631 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 01:10:16.895636 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 01:10:16.895641 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 01:10:16.895646 kernel: No NUMA configuration found
Apr 14 01:10:16.895651 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 01:10:16.895656 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 01:10:16.895662 kernel: Zone ranges:
Apr 14 01:10:16.895667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 01:10:16.895672 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 01:10:16.895677 kernel: Normal empty
Apr 14 01:10:16.895681 kernel: Movable zone start for each node
Apr 14 01:10:16.895686 kernel: Early memory node ranges
Apr 14 01:10:16.895691 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 01:10:16.895696 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 01:10:16.895701 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 01:10:16.895706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 01:10:16.895712 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 01:10:16.895717 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 01:10:16.895722 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 01:10:16.895727 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 01:10:16.895732 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 01:10:16.895737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 01:10:16.895742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 01:10:16.895747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 01:10:16.895751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 01:10:16.895758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 01:10:16.895762 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 01:10:16.895767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 01:10:16.895772 kernel: TSC deadline timer available
Apr 14 01:10:16.895777 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 01:10:16.895782 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 01:10:16.895787 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 01:10:16.895792 kernel: kvm-guest: setup PV sched yield
Apr 14 01:10:16.895797 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 01:10:16.895803 kernel: Booting paravirtualized kernel on KVM
Apr 14 01:10:16.895808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 01:10:16.895813 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 01:10:16.895818 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 01:10:16.895823 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 01:10:16.895827 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 01:10:16.895832 kernel: kvm-guest: PV spinlocks enabled
Apr 14 01:10:16.895837 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 01:10:16.895842 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:16.895849 kernel: random: crng init done
Apr 14 01:10:16.895854 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 01:10:16.895859 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 01:10:16.895864 kernel: Fallback order for Node 0: 0
Apr 14 01:10:16.895869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 01:10:16.895874 kernel: Policy zone: DMA32
Apr 14 01:10:16.895879 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 01:10:16.895884 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 14 01:10:16.895890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 01:10:16.895895 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 01:10:16.895900 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 01:10:16.895905 kernel: Dynamic Preempt: voluntary
Apr 14 01:10:16.895909 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 01:10:16.895915 kernel: rcu: RCU event tracing is enabled.
Apr 14 01:10:16.895920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 01:10:16.895943 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 01:10:16.895949 kernel: Rude variant of Tasks RCU enabled.
Apr 14 01:10:16.895954 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 01:10:16.895960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 01:10:16.895966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 01:10:16.895970 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 01:10:16.895975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 01:10:16.895980 kernel: Console: colour VGA+ 80x25
Apr 14 01:10:16.895985 kernel: printk: console [ttyS0] enabled
Apr 14 01:10:16.895990 kernel: ACPI: Core revision 20230628
Apr 14 01:10:16.895995 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 01:10:16.896000 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 01:10:16.896007 kernel: x2apic enabled
Apr 14 01:10:16.896012 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 01:10:16.896017 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 01:10:16.896022 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 01:10:16.896027 kernel: kvm-guest: setup PV IPIs
Apr 14 01:10:16.896032 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 01:10:16.896037 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 01:10:16.896049 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 01:10:16.896054 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 01:10:16.896059 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 01:10:16.896098 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 01:10:16.896107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 01:10:16.896112 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 01:10:16.896118 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 01:10:16.896123 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 01:10:16.896129 kernel: RETBleed: Vulnerable
Apr 14 01:10:16.896136 kernel: Speculative Store Bypass: Vulnerable
Apr 14 01:10:16.896141 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 01:10:16.896146 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 01:10:16.896152 kernel: active return thunk: its_return_thunk
Apr 14 01:10:16.896157 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 01:10:16.896163 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 01:10:16.896189 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 01:10:16.896194 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 01:10:16.896200 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 01:10:16.896230 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 01:10:16.896236 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 01:10:16.896241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 01:10:16.896247 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 01:10:16.896252 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 01:10:16.896257 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 01:10:16.896263 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 01:10:16.896268 kernel: Freeing SMP alternatives memory: 32K
Apr 14 01:10:16.896273 kernel: pid_max: default: 32768 minimum: 301
Apr 14 01:10:16.896281 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 01:10:16.896286 kernel: landlock: Up and running.
Apr 14 01:10:16.896291 kernel: SELinux: Initializing.
Apr 14 01:10:16.896297 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.896303 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.896308 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 01:10:16.896314 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.896319 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.896324 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.896331 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 01:10:16.896337 kernel: signal: max sigframe size: 3632
Apr 14 01:10:16.896342 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 01:10:16.896347 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 01:10:16.896353 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 01:10:16.896358 kernel: smp: Bringing up secondary CPUs ...
Apr 14 01:10:16.896364 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 01:10:16.896369 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 01:10:16.896374 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 01:10:16.896381 kernel: smpboot: Max logical packages: 1
Apr 14 01:10:16.896387 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 01:10:16.896392 kernel: devtmpfs: initialized
Apr 14 01:10:16.896397 kernel: x86/mm: Memory block size: 128MB
Apr 14 01:10:16.896403 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 01:10:16.896408 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.896414 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 01:10:16.896419 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 01:10:16.896425 kernel: audit: initializing netlink subsys (disabled)
Apr 14 01:10:16.896431 kernel: audit: type=2000 audit(1776129015.840:1): state=initialized audit_enabled=0 res=1
Apr 14 01:10:16.896437 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 01:10:16.896442 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 01:10:16.896448 kernel: cpuidle: using governor menu
Apr 14 01:10:16.896453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 01:10:16.896459 kernel: dca service started, version 1.12.1
Apr 14 01:10:16.896464 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 01:10:16.896470 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 01:10:16.896475 kernel: PCI: Using configuration type 1 for base access
Apr 14 01:10:16.896483 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 01:10:16.896488 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 01:10:16.896494 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 01:10:16.896499 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 01:10:16.896504 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 01:10:16.896510 kernel: ACPI: Added _OSI(Module Device)
Apr 14 01:10:16.896515 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 01:10:16.896521 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 01:10:16.896526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 01:10:16.896533 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 01:10:16.896538 kernel: ACPI: Interpreter enabled
Apr 14 01:10:16.896544 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 01:10:16.896549 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 01:10:16.896555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 01:10:16.896560 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 01:10:16.896565 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 01:10:16.896571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 01:10:16.896675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 01:10:16.896738 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 01:10:16.896792 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 01:10:16.896799 kernel: PCI host bridge to bus 0000:00
Apr 14 01:10:16.896858 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 01:10:16.896907 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 01:10:16.896977 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 01:10:16.897029 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 01:10:16.897078 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 01:10:16.897127 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 01:10:16.897414 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 01:10:16.897494 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 01:10:16.897555 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 01:10:16.897649 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 01:10:16.897705 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 01:10:16.897760 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 01:10:16.897816 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 01:10:16.897877 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 01:10:16.898022 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 01:10:16.898081 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 01:10:16.898158 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 01:10:16.898264 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 01:10:16.898322 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 01:10:16.898377 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 01:10:16.898485 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 01:10:16.898550 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 01:10:16.898606 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 01:10:16.898665 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 01:10:16.898720 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 01:10:16.898774 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 01:10:16.898833 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 01:10:16.898887 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 01:10:16.898969 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 01:10:16.899025 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 01:10:16.899082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 01:10:16.899142 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 01:10:16.899238 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 01:10:16.899246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 01:10:16.899252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 01:10:16.899258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 01:10:16.899263 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 01:10:16.899271 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 01:10:16.899277 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 01:10:16.899282 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 01:10:16.899288 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 01:10:16.899293 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 01:10:16.899298 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 01:10:16.899304 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 01:10:16.899309 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 01:10:16.899315 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 01:10:16.899322 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 01:10:16.899327 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 01:10:16.899333 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 01:10:16.899338 kernel: iommu: Default domain type: Translated
Apr 14 01:10:16.899344 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 01:10:16.899349 kernel: PCI: Using ACPI for IRQ routing
Apr 14 01:10:16.899354 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 01:10:16.899360 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 01:10:16.899365 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 01:10:16.899422 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 01:10:16.899477 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 01:10:16.899532 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 01:10:16.899539 kernel: vgaarb: loaded
Apr 14 01:10:16.899545 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 01:10:16.899550 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 01:10:16.899556 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 01:10:16.899561 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 01:10:16.899567 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 01:10:16.899574 kernel: pnp: PnP ACPI init
Apr 14 01:10:16.899639 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 01:10:16.899647 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 01:10:16.899653 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 01:10:16.899658 kernel: NET: Registered PF_INET protocol family
Apr 14 01:10:16.899664 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 01:10:16.899669 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 01:10:16.899675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 01:10:16.899682 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 01:10:16.899688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 01:10:16.899693 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 01:10:16.899699 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.899704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.899710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 01:10:16.899715 kernel: NET: Registered PF_XDP protocol family
Apr 14 01:10:16.899767 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 01:10:16.899817 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 01:10:16.899883 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 01:10:16.899978 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 01:10:16.900028 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 01:10:16.900078 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 01:10:16.900085 kernel: PCI: CLS 0 bytes, default 64
Apr 14 01:10:16.900090 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 01:10:16.900096 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 01:10:16.900101 kernel: Initialise system trusted keyrings
Apr 14 01:10:16.900109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 01:10:16.900115 kernel: Key type asymmetric registered
Apr 14 01:10:16.900120 kernel: Asymmetric key parser 'x509' registered
Apr 14 01:10:16.900126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 01:10:16.900131 kernel: io scheduler mq-deadline registered
Apr 14 01:10:16.900137 kernel: io scheduler kyber registered
Apr 14 01:10:16.900142 kernel: io scheduler bfq registered
Apr 14 01:10:16.900147 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 01:10:16.900153 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 01:10:16.900160 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 01:10:16.900189 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 01:10:16.900195 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 01:10:16.900200 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 01:10:16.900206 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 01:10:16.900211 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 01:10:16.900217 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 01:10:16.900331 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 01:10:16.900340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 01:10:16.900396 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 01:10:16.900447 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T01:10:16 UTC (1776129016)
Apr 14 01:10:16.900498 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 01:10:16.900505 kernel: intel_pstate: CPU model not supported
Apr 14 01:10:16.900510 kernel: NET: Registered PF_INET6 protocol family
Apr 14 01:10:16.900516 kernel: Segment Routing with IPv6
Apr 14 01:10:16.900521 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 01:10:16.900527 kernel: NET: Registered PF_PACKET protocol family
Apr 14 01:10:16.900533 kernel: Key type dns_resolver registered
Apr 14 01:10:16.900539 kernel: IPI shorthand broadcast: enabled
Apr 14 01:10:16.900544 kernel: sched_clock: Marking stable (807008329, 203315237)->(1056830981, -46507415)
Apr 14 01:10:16.900550 kernel: registered taskstats version 1
Apr 14 01:10:16.900555 kernel: Loading compiled-in X.509 certificates
Apr 14 01:10:16.900560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 01:10:16.900566 kernel: Key type .fscrypt registered
Apr 14 01:10:16.900571 kernel: Key type fscrypt-provisioning registered
Apr 14 01:10:16.900577 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 01:10:16.900583 kernel: ima: Allocated hash algorithm: sha1
Apr 14 01:10:16.900589 kernel: ima: No architecture policies found
Apr 14 01:10:16.900594 kernel: clk: Disabling unused clocks
Apr 14 01:10:16.900599 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 01:10:16.900605 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 01:10:16.900610 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 01:10:16.900615 kernel: Run /init as init process
Apr 14 01:10:16.900621 kernel: with arguments:
Apr 14 01:10:16.900626 kernel: /init
Apr 14 01:10:16.900634 kernel: with environment:
Apr 14 01:10:16.900639 kernel: HOME=/
Apr 14 01:10:16.900645 kernel: TERM=linux
Apr 14 01:10:16.900652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 01:10:16.900659 systemd[1]: Detected virtualization kvm.
Apr 14 01:10:16.900665 systemd[1]: Detected architecture x86-64.
Apr 14 01:10:16.900671 systemd[1]: Running in initrd.
Apr 14 01:10:16.900677 systemd[1]: No hostname configured, using default hostname.
Apr 14 01:10:16.900684 systemd[1]: Hostname set to .
Apr 14 01:10:16.900690 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 01:10:16.900695 systemd[1]: Queued start job for default target initrd.target.
Apr 14 01:10:16.900701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 01:10:16.900707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 01:10:16.900713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 01:10:16.900719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 01:10:16.900725 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 01:10:16.900732 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 01:10:16.900747 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 01:10:16.900753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 01:10:16.900759 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 01:10:16.900766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 01:10:16.900772 systemd[1]: Reached target paths.target - Path Units.
Apr 14 01:10:16.900777 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 01:10:16.900784 systemd[1]: Reached target swap.target - Swaps.
Apr 14 01:10:16.900789 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 01:10:16.900795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 01:10:16.900801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 01:10:16.900807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 01:10:16.900816 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 01:10:16.900824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 01:10:16.900830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 01:10:16.900836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 01:10:16.900841 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 01:10:16.900847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 01:10:16.900853 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 01:10:16.900859 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 01:10:16.900865 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 01:10:16.900871 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 01:10:16.900878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 01:10:16.900886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:16.900891 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 01:10:16.900909 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 01:10:16.900944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 01:10:16.900951 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 01:10:16.900961 systemd-journald[194]: Journal started
Apr 14 01:10:16.900976 systemd-journald[194]: Runtime Journal (/run/log/journal/62f0afcd86864474910d5d8d4fdfc605) is 6.0M, max 48.4M, 42.3M free.
Apr 14 01:10:16.889570 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 01:10:16.903556 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 01:10:16.914343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 01:10:16.915771 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 01:10:17.017436 kernel: Bridge firewalling registered
Apr 14 01:10:16.916315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 01:10:17.031685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 01:10:17.034107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 01:10:17.037972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:17.039680 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 01:10:17.041672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:17.045121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 01:10:17.047273 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 01:10:17.047822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 01:10:17.057432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:17.059589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 01:10:17.064870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:10:17.086476 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 01:10:17.090794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 01:10:17.094846 dracut-cmdline[231]: dracut-dracut-053
Apr 14 01:10:17.098395 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:17.114295 systemd-resolved[236]: Positive Trust Anchors:
Apr 14 01:10:17.114316 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 01:10:17.114341 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 01:10:17.116412 systemd-resolved[236]: Defaulting to hostname 'linux'.
Apr 14 01:10:17.117117 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 01:10:17.117773 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 01:10:17.199367 kernel: SCSI subsystem initialized
Apr 14 01:10:17.209350 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 01:10:17.221462 kernel: iscsi: registered transport (tcp)
Apr 14 01:10:17.244018 kernel: iscsi: registered transport (qla4xxx)
Apr 14 01:10:17.244077 kernel: QLogic iSCSI HBA Driver
Apr 14 01:10:17.278502 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 01:10:17.287522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 01:10:17.309877 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 01:10:17.309986 kernel: device-mapper: uevent: version 1.0.3
Apr 14 01:10:17.309997 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 01:10:17.350419 kernel: raid6: avx512x4 gen() 44270 MB/s
Apr 14 01:10:17.367363 kernel: raid6: avx512x2 gen() 42851 MB/s
Apr 14 01:10:17.385489 kernel: raid6: avx512x1 gen() 41084 MB/s
Apr 14 01:10:17.402540 kernel: raid6: avx2x4 gen() 35504 MB/s
Apr 14 01:10:17.420465 kernel: raid6: avx2x2 gen() 35157 MB/s
Apr 14 01:10:17.441806 kernel: raid6: avx2x1 gen() 24758 MB/s
Apr 14 01:10:17.441912 kernel: raid6: using algorithm avx512x4 gen() 44270 MB/s
Apr 14 01:10:17.462834 kernel: raid6: .... xor() 6596 MB/s, rmw enabled
Apr 14 01:10:17.463723 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 01:10:17.494370 kernel: xor: automatically using best checksumming function avx
Apr 14 01:10:17.625453 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 01:10:17.636844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 01:10:17.646515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 01:10:17.665226 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Apr 14 01:10:17.670339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 01:10:17.690594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 01:10:17.701591 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Apr 14 01:10:17.724698 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 01:10:17.737699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 01:10:17.768730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 01:10:17.778435 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 01:10:17.787678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 01:10:17.791991 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 01:10:17.794708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 01:10:17.800091 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 01:10:17.809235 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 01:10:17.813957 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 01:10:17.819316 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 01:10:17.823891 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 01:10:17.825537 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 01:10:17.829721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 01:10:17.844500 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 01:10:17.844545 kernel: GPT:9289727 != 19775487
Apr 14 01:10:17.844553 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 01:10:17.844569 kernel: GPT:9289727 != 19775487
Apr 14 01:10:17.844591 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 01:10:17.844606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:17.829808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:17.838276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:17.844411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 01:10:17.844673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:17.853962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:17.869341 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (463)
Apr 14 01:10:17.869395 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Apr 14 01:10:17.866538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:17.875155 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 01:10:17.876233 kernel: libata version 3.00 loaded.
Apr 14 01:10:17.881217 kernel: AES CTR mode by8 optimization enabled
Apr 14 01:10:17.886215 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 01:10:17.888419 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 01:10:17.889824 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 01:10:17.984984 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 01:10:17.985228 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 01:10:17.985329 kernel: scsi host0: ahci
Apr 14 01:10:17.985431 kernel: scsi host1: ahci
Apr 14 01:10:17.985513 kernel: scsi host2: ahci
Apr 14 01:10:17.985599 kernel: scsi host3: ahci
Apr 14 01:10:17.985681 kernel: scsi host4: ahci
Apr 14 01:10:17.985762 kernel: scsi host5: ahci
Apr 14 01:10:17.985842 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 01:10:17.985852 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 01:10:17.985860 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 01:10:17.985869 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 01:10:17.985877 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 01:10:17.985885 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 01:10:17.989980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:18.000688 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 01:10:18.003636 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 01:10:18.005263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 01:10:18.013287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 01:10:18.024377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 01:10:18.026710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:18.034861 disk-uuid[555]: Primary Header is updated.
Apr 14 01:10:18.034861 disk-uuid[555]: Secondary Entries is updated.
Apr 14 01:10:18.034861 disk-uuid[555]: Secondary Header is updated.
Apr 14 01:10:18.042195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:18.048201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:18.053286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:18.203394 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.203458 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.206233 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.206276 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.207215 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 01:10:18.210321 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.210354 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 01:10:18.210363 kernel: ata3.00: applying bridge limits
Apr 14 01:10:18.212302 kernel: ata3.00: configured for UDMA/100
Apr 14 01:10:18.215258 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 01:10:18.260472 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 01:10:18.260798 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 01:10:18.277257 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 01:10:19.045226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:19.045873 disk-uuid[557]: The operation has completed successfully.
Apr 14 01:10:19.074495 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 01:10:19.074659 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 01:10:19.102737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 01:10:19.107514 sh[593]: Success
Apr 14 01:10:19.123204 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 01:10:19.159818 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 01:10:19.175040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 01:10:19.179230 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 01:10:19.189392 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 01:10:19.189447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.189461 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 01:10:19.190858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 01:10:19.191905 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 01:10:19.198796 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 01:10:19.202288 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 01:10:19.221773 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 01:10:19.224251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 01:10:19.238326 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.238380 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.238396 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:19.243357 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:19.253161 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 01:10:19.257687 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.265879 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 01:10:19.276526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 01:10:19.333464 ignition[697]: Ignition 2.19.0
Apr 14 01:10:19.333476 ignition[697]: Stage: fetch-offline
Apr 14 01:10:19.333512 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.333524 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.333643 ignition[697]: parsed url from cmdline: ""
Apr 14 01:10:19.333647 ignition[697]: no config URL provided
Apr 14 01:10:19.333652 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 01:10:19.333660 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Apr 14 01:10:19.333682 ignition[697]: op(1): [started] loading QEMU firmware config module
Apr 14 01:10:19.333687 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 01:10:19.348883 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 01:10:19.358020 ignition[697]: op(1): [finished] loading QEMU firmware config module
Apr 14 01:10:19.358048 ignition[697]: QEMU firmware config was not found. Ignoring...
Apr 14 01:10:19.362399 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 01:10:19.440282 systemd-networkd[781]: lo: Link UP
Apr 14 01:10:19.440301 systemd-networkd[781]: lo: Gained carrier
Apr 14 01:10:19.441528 systemd-networkd[781]: Enumeration completed
Apr 14 01:10:19.441637 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 01:10:19.442219 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:19.442222 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 01:10:19.442384 systemd[1]: Reached target network.target - Network.
Apr 14 01:10:19.443269 systemd-networkd[781]: eth0: Link UP
Apr 14 01:10:19.443272 systemd-networkd[781]: eth0: Gained carrier
Apr 14 01:10:19.443280 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:19.478477 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 01:10:19.488333 ignition[697]: parsing config with SHA512: da002479a1108fcfb773f4758bb1f65cb50e6d00f8dc3fdf76b9765717e54201b9d9b8117eadefaec1379cff9f2768905ebd4ba599e17ef26c573249e22f9d78
Apr 14 01:10:19.493716 unknown[697]: fetched base config from "system"
Apr 14 01:10:19.493726 unknown[697]: fetched user config from "qemu"
Apr 14 01:10:19.494192 ignition[697]: fetch-offline: fetch-offline passed
Apr 14 01:10:19.495363 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.9
Apr 14 01:10:19.494246 ignition[697]: Ignition finished successfully
Apr 14 01:10:19.495374 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Apr 14 01:10:19.495952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 01:10:19.497746 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 01:10:19.506533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 01:10:19.525789 ignition[785]: Ignition 2.19.0
Apr 14 01:10:19.526055 ignition[785]: Stage: kargs
Apr 14 01:10:19.526399 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.526411 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.530933 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 01:10:19.527776 ignition[785]: kargs: kargs passed
Apr 14 01:10:19.527842 ignition[785]: Ignition finished successfully
Apr 14 01:10:19.541445 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 01:10:19.560593 ignition[793]: Ignition 2.19.0
Apr 14 01:10:19.560670 ignition[793]: Stage: disks
Apr 14 01:10:19.560859 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.560871 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.562158 ignition[793]: disks: disks passed
Apr 14 01:10:19.562308 ignition[793]: Ignition finished successfully
Apr 14 01:10:19.571854 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 01:10:19.576591 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 01:10:19.579059 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 01:10:19.582595 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 01:10:19.584664 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 01:10:19.588327 systemd[1]: Reached target basic.target - Basic System.
Apr 14 01:10:19.608689 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 01:10:19.625526 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 01:10:19.631673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 01:10:19.641320 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 01:10:19.739217 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 01:10:19.739773 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 01:10:19.740432 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 01:10:19.752317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 01:10:19.757290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 01:10:19.762124 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 14 01:10:19.762264 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.757515 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 01:10:19.772417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.772444 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:19.772457 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:19.757547 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 01:10:19.757567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 01:10:19.774616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 01:10:19.787312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 01:10:19.790812 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 01:10:19.830240 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 01:10:19.836749 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 14 01:10:19.842895 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 01:10:19.845657 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 01:10:19.932749 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 01:10:19.941587 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 01:10:19.944803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 01:10:19.949252 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.969400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 01:10:19.975209 ignition[925]: INFO : Ignition 2.19.0
Apr 14 01:10:19.976728 ignition[925]: INFO : Stage: mount
Apr 14 01:10:19.976728 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.976728 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.976728 ignition[925]: INFO : mount: mount passed
Apr 14 01:10:19.976728 ignition[925]: INFO : Ignition finished successfully
Apr 14 01:10:19.981247 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 01:10:19.996326 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 01:10:20.188494 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 01:10:20.201519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 01:10:20.212290 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Apr 14 01:10:20.212413 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:20.215334 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:20.215373 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:20.221213 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:20.221971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 01:10:20.244619 ignition[957]: INFO : Ignition 2.19.0
Apr 14 01:10:20.244619 ignition[957]: INFO : Stage: files
Apr 14 01:10:20.244619 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:20.244619 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:20.251406 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 01:10:20.251406 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 01:10:20.251406 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 01:10:20.260816 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 01:10:20.263300 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 01:10:20.265740 unknown[957]: wrote ssh authorized keys file for user: core
Apr 14 01:10:20.267669 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 01:10:20.270202 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 01:10:20.270202 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 01:10:20.309577 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 01:10:20.358076 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 01:10:20.358076 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 01:10:20.365092 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 14 01:10:20.584520 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 14 01:10:20.710989 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 01:10:20.710989 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 01:10:20.716117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 01:10:20.716117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 01:10:20.716117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 01:10:20.716117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 01:10:20.725452 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 01:10:20.725452 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 01:10:20.730081 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 01:10:20.732487 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 01:10:20.735011 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 01:10:20.738480 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 01:10:20.742999 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 01:10:20.746217 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 01:10:20.749204 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 14 01:10:20.898034 systemd-networkd[781]: eth0: Gained IPv6LL
Apr 14 01:10:21.021117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 14 01:10:21.491410 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 01:10:21.491410 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 14 01:10:21.498123 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 01:10:21.523611 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 01:10:21.527448 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 01:10:21.529712 ignition[957]: INFO : files: files passed
Apr 14 01:10:21.529712 ignition[957]: INFO : Ignition finished successfully
Apr 14 01:10:21.542076 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 01:10:21.566739 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 01:10:21.573129 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 01:10:21.579772 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 01:10:21.581640 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 01:10:21.587871 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Apr 14 01:10:21.591689 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:10:21.591689 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:10:21.596695 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:10:21.600316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 01:10:21.602852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 14 01:10:21.613708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 01:10:21.646662 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 01:10:21.646780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 01:10:21.653312 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 01:10:21.653434 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 01:10:21.659078 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 01:10:21.663510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 01:10:21.683830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 01:10:21.693379 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 01:10:21.704405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 01:10:21.704644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 01:10:21.710235 systemd[1]: Stopped target timers.target - Timer Units. 
Apr 14 01:10:21.713604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 01:10:21.713711 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 01:10:21.720141 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 01:10:21.724654 systemd[1]: Stopped target basic.target - Basic System. Apr 14 01:10:21.726531 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 01:10:21.730680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 01:10:21.734436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 01:10:21.738628 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 14 01:10:21.742032 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 01:10:21.746086 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 01:10:21.750117 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 01:10:21.752974 systemd[1]: Stopped target swap.target - Swaps. Apr 14 01:10:21.755715 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 01:10:21.755839 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 01:10:21.760732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 14 01:10:21.764198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 01:10:21.768070 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 01:10:21.769981 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 01:10:21.771466 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 01:10:21.771574 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 01:10:21.778625 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 14 01:10:21.778744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 01:10:21.780906 systemd[1]: Stopped target paths.target - Path Units. Apr 14 01:10:21.788795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 01:10:21.790874 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 01:10:21.796552 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 01:10:21.796720 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 01:10:21.802073 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 01:10:21.802267 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 01:10:21.803439 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 01:10:21.803565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 01:10:21.807528 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 01:10:21.807701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 01:10:21.812085 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 01:10:21.812267 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 01:10:21.830557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 01:10:21.832648 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 01:10:21.832861 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 01:10:21.838509 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 01:10:21.838629 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 01:10:21.838739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 01:10:21.846643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 14 01:10:21.846726 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 01:10:21.850630 ignition[1011]: INFO : Ignition 2.19.0 Apr 14 01:10:21.850630 ignition[1011]: INFO : Stage: umount Apr 14 01:10:21.850630 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 01:10:21.850630 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:10:21.850630 ignition[1011]: INFO : umount: umount passed Apr 14 01:10:21.850630 ignition[1011]: INFO : Ignition finished successfully Apr 14 01:10:21.864618 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 01:10:21.865538 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 01:10:21.865630 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 14 01:10:21.868718 systemd[1]: Stopped target network.target - Network. Apr 14 01:10:21.871791 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 01:10:21.871870 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 14 01:10:21.874546 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 01:10:21.874614 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 01:10:21.878550 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 01:10:21.878591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 01:10:21.881457 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 01:10:21.881534 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 01:10:21.887299 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 01:10:21.889933 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 01:10:21.893524 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 01:10:21.893609 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 14 01:10:21.905535 systemd-networkd[781]: eth0: DHCPv6 lease lost Apr 14 01:10:21.911317 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 01:10:21.913073 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 01:10:21.917488 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 01:10:21.917645 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 01:10:21.923429 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 01:10:21.923569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 14 01:10:21.925704 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 01:10:21.925731 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 01:10:21.929078 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 01:10:21.929115 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 01:10:21.946887 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 14 01:10:21.950089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 01:10:21.950224 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 01:10:21.953987 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 01:10:21.954049 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 01:10:21.958521 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 01:10:21.958576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 01:10:21.962828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 01:10:21.962882 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:10:21.967538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 14 01:10:21.983132 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 01:10:21.983272 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 01:10:21.984903 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 01:10:21.985049 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 01:10:21.989726 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 01:10:21.989771 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 01:10:21.990555 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 01:10:21.990582 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 01:10:21.996808 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 01:10:21.998273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 14 01:10:22.005891 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 01:10:22.005967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 01:10:22.010107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 01:10:22.010210 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 01:10:22.036457 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 01:10:22.036557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 01:10:22.036603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 01:10:22.042053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 01:10:22.042094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:10:22.045479 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Apr 14 01:10:22.045552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 01:10:22.050349 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 01:10:22.052549 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 01:10:22.064731 systemd[1]: Switching root. Apr 14 01:10:22.095873 systemd-journald[194]: Journal stopped Apr 14 01:10:22.947640 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Apr 14 01:10:22.947688 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 01:10:22.947701 kernel: SELinux: policy capability open_perms=1 Apr 14 01:10:22.947711 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 01:10:22.947718 kernel: SELinux: policy capability always_check_network=0 Apr 14 01:10:22.947726 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 01:10:22.947736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 01:10:22.947743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 01:10:22.947750 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 01:10:22.947761 kernel: audit: type=1403 audit(1776129022.219:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 01:10:22.947773 systemd[1]: Successfully loaded SELinux policy in 32.946ms. Apr 14 01:10:22.947787 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.109ms. Apr 14 01:10:22.947796 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 01:10:22.947807 systemd[1]: Detected virtualization kvm. Apr 14 01:10:22.947815 systemd[1]: Detected architecture x86-64. Apr 14 01:10:22.947824 systemd[1]: Detected first boot. 
Apr 14 01:10:22.947833 systemd[1]: Initializing machine ID from VM UUID. Apr 14 01:10:22.947841 zram_generator::config[1056]: No configuration found. Apr 14 01:10:22.947850 systemd[1]: Populated /etc with preset unit settings. Apr 14 01:10:22.947858 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 14 01:10:22.947868 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 14 01:10:22.947875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 14 01:10:22.947884 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 01:10:22.947892 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 01:10:22.947900 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 01:10:22.947907 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 01:10:22.947914 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 01:10:22.947922 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 01:10:22.947931 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 01:10:22.947939 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 01:10:22.947946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 01:10:22.947977 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 01:10:22.947986 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 01:10:22.947994 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 01:10:22.948001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Apr 14 01:10:22.948010 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 01:10:22.948017 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 14 01:10:22.948029 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 01:10:22.948037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 14 01:10:22.948044 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 14 01:10:22.948052 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 14 01:10:22.948059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 01:10:22.948067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 01:10:22.948075 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 01:10:22.948083 systemd[1]: Reached target slices.target - Slice Units. Apr 14 01:10:22.948093 systemd[1]: Reached target swap.target - Swaps. Apr 14 01:10:22.948101 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 01:10:22.948109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 01:10:22.948117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 01:10:22.948125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 01:10:22.948133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 01:10:22.948141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 01:10:22.948148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 01:10:22.948156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 01:10:22.948274 systemd[1]: Mounting media.mount - External Media Directory... 
Apr 14 01:10:22.948291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:22.948299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 01:10:22.948307 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 01:10:22.948316 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 14 01:10:22.948324 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 01:10:22.948332 systemd[1]: Reached target machines.target - Containers. Apr 14 01:10:22.948341 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 01:10:22.948351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:10:22.948359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 01:10:22.948367 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 01:10:22.948374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:10:22.948381 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 01:10:22.948389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:10:22.948397 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 01:10:22.948405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:10:22.948413 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 01:10:22.948422 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Apr 14 01:10:22.948430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 14 01:10:22.948437 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 14 01:10:22.948445 systemd[1]: Stopped systemd-fsck-usr.service. Apr 14 01:10:22.948453 kernel: fuse: init (API version 7.39) Apr 14 01:10:22.948463 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 01:10:22.948470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 01:10:22.948478 kernel: ACPI: bus type drm_connector registered Apr 14 01:10:22.948486 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 01:10:22.948495 kernel: loop: module loaded Apr 14 01:10:22.948518 systemd-journald[1140]: Collecting audit messages is disabled. Apr 14 01:10:22.948536 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 01:10:22.948544 systemd-journald[1140]: Journal started Apr 14 01:10:22.948561 systemd-journald[1140]: Runtime Journal (/run/log/journal/62f0afcd86864474910d5d8d4fdfc605) is 6.0M, max 48.4M, 42.3M free. Apr 14 01:10:22.622305 systemd[1]: Queued start job for default target multi-user.target. Apr 14 01:10:22.644514 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 01:10:22.645467 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 14 01:10:22.954205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 01:10:22.957273 systemd[1]: verity-setup.service: Deactivated successfully. Apr 14 01:10:22.957315 systemd[1]: Stopped verity-setup.service. Apr 14 01:10:22.963190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:22.966247 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 14 01:10:22.966651 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 01:10:22.968279 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 01:10:22.970122 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 01:10:22.971834 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 14 01:10:22.973662 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 01:10:22.975559 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 01:10:22.977531 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 01:10:22.979598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 01:10:22.981791 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 01:10:22.981969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 01:10:22.984606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:10:22.984798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:10:22.987002 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 01:10:22.987160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 01:10:22.989259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:10:22.989426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:10:22.992037 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 01:10:22.992456 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 01:10:22.994601 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:10:22.994816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:10:22.997247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 14 01:10:22.999485 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 01:10:23.002242 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 14 01:10:23.010695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 01:10:23.017439 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 14 01:10:23.029335 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 01:10:23.031897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 14 01:10:23.033865 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 01:10:23.033913 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 01:10:23.036595 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 01:10:23.040375 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 01:10:23.043118 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 01:10:23.044923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:10:23.046557 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 01:10:23.049354 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 01:10:23.051236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 01:10:23.052143 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 14 01:10:23.054300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 01:10:23.055509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 01:10:23.062316 systemd-journald[1140]: Time spent on flushing to /var/log/journal/62f0afcd86864474910d5d8d4fdfc605 is 27.141ms for 956 entries. Apr 14 01:10:23.062316 systemd-journald[1140]: System Journal (/var/log/journal/62f0afcd86864474910d5d8d4fdfc605) is 8.0M, max 195.6M, 187.6M free. Apr 14 01:10:23.108397 systemd-journald[1140]: Received client request to flush runtime journal. Apr 14 01:10:23.108444 kernel: loop0: detected capacity change from 0 to 219192 Apr 14 01:10:23.064793 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 14 01:10:23.067787 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 14 01:10:23.075382 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 14 01:10:23.079574 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 14 01:10:23.083433 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 14 01:10:23.090025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 14 01:10:23.092606 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 14 01:10:23.099598 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 14 01:10:23.113423 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 14 01:10:23.116094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 14 01:10:23.118459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 14 01:10:23.123775 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 14 01:10:23.121401 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 14 01:10:23.123809 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 14 01:10:23.134397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 01:10:23.139267 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 14 01:10:23.140755 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 14 01:10:23.155239 kernel: loop1: detected capacity change from 0 to 142488 Apr 14 01:10:23.161842 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 14 01:10:23.161854 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 14 01:10:23.166866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 01:10:23.199275 kernel: loop2: detected capacity change from 0 to 140768 Apr 14 01:10:23.251220 kernel: loop3: detected capacity change from 0 to 219192 Apr 14 01:10:23.263272 kernel: loop4: detected capacity change from 0 to 142488 Apr 14 01:10:23.276356 kernel: loop5: detected capacity change from 0 to 140768 Apr 14 01:10:23.287930 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 14 01:10:23.288318 (sd-merge)[1195]: Merged extensions into '/usr'. Apr 14 01:10:23.291976 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Apr 14 01:10:23.292003 systemd[1]: Reloading... Apr 14 01:10:23.329239 zram_generator::config[1218]: No configuration found. Apr 14 01:10:23.387916 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 14 01:10:23.436540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:10:23.469302 systemd[1]: Reloading finished in 176 ms. Apr 14 01:10:23.503133 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 01:10:23.505492 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 14 01:10:23.507845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 01:10:23.522444 systemd[1]: Starting ensure-sysext.service... Apr 14 01:10:23.525687 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 01:10:23.529132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 01:10:23.533497 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Apr 14 01:10:23.533522 systemd[1]: Reloading... Apr 14 01:10:23.544582 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 01:10:23.545535 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 01:10:23.546123 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 01:10:23.546329 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 14 01:10:23.546382 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 14 01:10:23.548542 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 01:10:23.548639 systemd-tmpfiles[1260]: Skipping /boot Apr 14 01:10:23.554465 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 14 01:10:23.555771 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Apr 14 01:10:23.556751 systemd-tmpfiles[1260]: Skipping /boot Apr 14 01:10:23.580225 zram_generator::config[1287]: No configuration found. Apr 14 01:10:23.631254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1294) Apr 14 01:10:23.679217 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 14 01:10:23.684426 kernel: ACPI: button: Power Button [PWRF] Apr 14 01:10:23.698832 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 01:10:23.699134 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 01:10:23.699362 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 01:10:23.707481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:10:23.714209 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 14 01:10:23.751919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 01:10:23.754047 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 14 01:10:23.754542 systemd[1]: Reloading finished in 220 ms. Apr 14 01:10:23.801266 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 01:10:23.809020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 01:10:23.812054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:10:23.859465 systemd[1]: Finished ensure-sysext.service. Apr 14 01:10:23.870235 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 14 01:10:23.873822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 01:10:23.885934 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 01:10:23.889514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 01:10:23.891416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 01:10:23.892212 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 01:10:23.894986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 01:10:23.899845 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 01:10:23.902244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 01:10:23.907562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 01:10:23.908629 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 01:10:23.909613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 01:10:23.915502 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 01:10:23.918729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 01:10:23.923345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 01:10:23.927376 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 01:10:23.931315 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 01:10:23.940348 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 01:10:23.945266 augenrules[1386]: No rules
Apr 14 01:10:23.944238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:23.946990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 01:10:23.948096 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 01:10:23.951396 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 01:10:23.955127 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 01:10:23.957782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 01:10:23.957920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 01:10:23.959942 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 01:10:23.960697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 01:10:23.960951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 01:10:23.961509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 01:10:23.962141 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 01:10:23.962269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 01:10:23.962485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 01:10:23.962922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 01:10:23.970032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 01:10:23.976931 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 01:10:23.977090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 01:10:23.977160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 01:10:23.978726 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 01:10:23.980842 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 01:10:23.987723 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 01:10:23.983055 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 01:10:23.983725 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 01:10:23.990857 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 01:10:24.007752 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 01:10:24.021521 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 01:10:24.066029 systemd-networkd[1380]: lo: Link UP
Apr 14 01:10:24.066050 systemd-networkd[1380]: lo: Gained carrier
Apr 14 01:10:24.070426 systemd-networkd[1380]: Enumeration completed
Apr 14 01:10:24.070908 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:24.070921 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 01:10:24.071527 systemd-networkd[1380]: eth0: Link UP
Apr 14 01:10:24.071593 systemd-networkd[1380]: eth0: Gained carrier
Apr 14 01:10:24.071640 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:24.079029 systemd-resolved[1383]: Positive Trust Anchors:
Apr 14 01:10:24.079050 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 01:10:24.079074 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 01:10:24.082560 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Apr 14 01:10:24.084233 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 01:10:24.084902 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection.
Apr 14 01:10:24.086369 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 01:10:24.086419 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2026-04-14 01:10:24.352247 UTC.
Apr 14 01:10:24.108591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 01:10:24.109018 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 01:10:24.109254 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 01:10:24.109475 systemd[1]: Reached target network.target - Network.
Apr 14 01:10:24.109628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 01:10:24.109862 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 01:10:24.133396 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 01:10:24.135749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:24.138388 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 01:10:24.140075 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 01:10:24.141916 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 01:10:24.144416 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 01:10:24.146154 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 01:10:24.148101 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 01:10:24.150001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 01:10:24.150029 systemd[1]: Reached target paths.target - Path Units.
Apr 14 01:10:24.151435 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 01:10:24.153339 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 01:10:24.156231 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 01:10:24.168153 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 01:10:24.170416 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 01:10:24.172073 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 01:10:24.173512 systemd[1]: Reached target basic.target - Basic System.
Apr 14 01:10:24.174919 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 01:10:24.174946 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 01:10:24.175795 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 01:10:24.178029 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 01:10:24.180897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 01:10:24.184441 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 01:10:24.186045 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 01:10:24.186935 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 01:10:24.188802 jq[1426]: false
Apr 14 01:10:24.189542 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 01:10:24.194311 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 01:10:24.197226 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 01:10:24.201601 dbus-daemon[1425]: [system] SELinux support is enabled
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found loop3
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found loop4
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found loop5
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found sr0
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda1
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda2
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda3
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found usr
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda4
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda6
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda7
Apr 14 01:10:24.205411 extend-filesystems[1427]: Found vda9
Apr 14 01:10:24.205411 extend-filesystems[1427]: Checking size of /dev/vda9
Apr 14 01:10:24.245605 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1300)
Apr 14 01:10:24.245627 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 01:10:24.245671 extend-filesystems[1427]: Resized partition /dev/vda9
Apr 14 01:10:24.205417 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 01:10:24.247563 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024)
Apr 14 01:10:24.208570 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 01:10:24.208889 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 01:10:24.249511 update_engine[1441]: I20260414 01:10:24.236236 1441 main.cc:92] Flatcar Update Engine starting
Apr 14 01:10:24.249511 update_engine[1441]: I20260414 01:10:24.245404 1441 update_check_scheduler.cc:74] Next update check in 9m49s
Apr 14 01:10:24.214873 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 01:10:24.222300 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 01:10:24.230849 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 01:10:24.242532 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 01:10:24.242700 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 01:10:24.242903 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 01:10:24.243069 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 01:10:24.250597 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 01:10:24.252853 jq[1445]: true
Apr 14 01:10:24.250758 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 01:10:24.252050 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 01:10:24.252062 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 01:10:24.253585 systemd-logind[1434]: New seat seat0.
Apr 14 01:10:24.254237 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 01:10:24.257377 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 01:10:24.267032 dbus-daemon[1425]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 14 01:10:24.260039 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 01:10:24.270012 jq[1452]: true
Apr 14 01:10:24.272107 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 01:10:24.272107 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 01:10:24.272107 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 01:10:24.271860 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 01:10:24.283950 tar[1450]: linux-amd64/LICENSE
Apr 14 01:10:24.283950 tar[1450]: linux-amd64/helm
Apr 14 01:10:24.284124 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Apr 14 01:10:24.272031 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 01:10:24.275370 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 01:10:24.279803 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 01:10:24.279951 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 01:10:24.285801 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 01:10:24.285885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 01:10:24.295981 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 01:10:24.309761 bash[1480]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 01:10:24.311309 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 01:10:24.313778 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 01:10:24.341474 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 14 01:10:24.351463 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 01:10:24.361277 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 14 01:10:24.369440 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 14 01:10:24.377088 systemd[1]: issuegen.service: Deactivated successfully.
Apr 14 01:10:24.377254 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 14 01:10:24.380059 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 14 01:10:24.391412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 14 01:10:24.401564 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 14 01:10:24.404371 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 14 01:10:24.406636 systemd[1]: Reached target getty.target - Login Prompts.
Apr 14 01:10:24.430412 containerd[1453]: time="2026-04-14T01:10:24.429334873Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 01:10:24.453509 containerd[1453]: time="2026-04-14T01:10:24.453325919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456269 containerd[1453]: time="2026-04-14T01:10:24.456224543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456269 containerd[1453]: time="2026-04-14T01:10:24.456262959Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 01:10:24.456362 containerd[1453]: time="2026-04-14T01:10:24.456278939Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 01:10:24.456415 containerd[1453]: time="2026-04-14T01:10:24.456395881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 01:10:24.456446 containerd[1453]: time="2026-04-14T01:10:24.456428883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456500 containerd[1453]: time="2026-04-14T01:10:24.456482932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456517 containerd[1453]: time="2026-04-14T01:10:24.456501103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456664 containerd[1453]: time="2026-04-14T01:10:24.456645624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456686 containerd[1453]: time="2026-04-14T01:10:24.456665810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456686 containerd[1453]: time="2026-04-14T01:10:24.456676150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456686 containerd[1453]: time="2026-04-14T01:10:24.456683152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456749 containerd[1453]: time="2026-04-14T01:10:24.456733567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.456903 containerd[1453]: time="2026-04-14T01:10:24.456883601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 01:10:24.457027 containerd[1453]: time="2026-04-14T01:10:24.457002572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 01:10:24.457050 containerd[1453]: time="2026-04-14T01:10:24.457032087Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 01:10:24.457113 containerd[1453]: time="2026-04-14T01:10:24.457094897Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 01:10:24.457157 containerd[1453]: time="2026-04-14T01:10:24.457141668Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 01:10:24.463060 containerd[1453]: time="2026-04-14T01:10:24.463008949Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 01:10:24.463155 containerd[1453]: time="2026-04-14T01:10:24.463145853Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 01:10:24.463268 containerd[1453]: time="2026-04-14T01:10:24.463163460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 01:10:24.463268 containerd[1453]: time="2026-04-14T01:10:24.463235976Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 01:10:24.463268 containerd[1453]: time="2026-04-14T01:10:24.463252984Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 01:10:24.463488 containerd[1453]: time="2026-04-14T01:10:24.463455366Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 01:10:24.463795 containerd[1453]: time="2026-04-14T01:10:24.463737820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 01:10:24.463882 containerd[1453]: time="2026-04-14T01:10:24.463857936Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 01:10:24.463903 containerd[1453]: time="2026-04-14T01:10:24.463883808Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 01:10:24.463903 containerd[1453]: time="2026-04-14T01:10:24.463894857Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 01:10:24.463934 containerd[1453]: time="2026-04-14T01:10:24.463906509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.463934 containerd[1453]: time="2026-04-14T01:10:24.463917335Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.463934 containerd[1453]: time="2026-04-14T01:10:24.463926602Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464010 containerd[1453]: time="2026-04-14T01:10:24.463937226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464010 containerd[1453]: time="2026-04-14T01:10:24.463947825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464010 containerd[1453]: time="2026-04-14T01:10:24.463980619Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464010 containerd[1453]: time="2026-04-14T01:10:24.463990716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464010 containerd[1453]: time="2026-04-14T01:10:24.463999198Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464015339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464029382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464043237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464052301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464060674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464069631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464083 containerd[1453]: time="2026-04-14T01:10:24.464078797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464088380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464097730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464108745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464116713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464125716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464134987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464149336Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464194731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464208181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464238 containerd[1453]: time="2026-04-14T01:10:24.464217449Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464254754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464268039Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464276410Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464285578Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464292687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464305083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464313339Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 01:10:24.464391 containerd[1453]: time="2026-04-14T01:10:24.464321061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.464568 containerd[1453]: time="2026-04-14T01:10:24.464523204Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 01:10:24.464706 containerd[1453]: time="2026-04-14T01:10:24.464575298Z" level=info msg="Connect containerd service"
Apr 14 01:10:24.464706 containerd[1453]: time="2026-04-14T01:10:24.464603884Z" level=info msg="using legacy CRI server"
Apr 14 01:10:24.464706 containerd[1453]: time="2026-04-14T01:10:24.464608446Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 01:10:24.464754 containerd[1453]: time="2026-04-14T01:10:24.464712702Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 01:10:24.465258 containerd[1453]: time="2026-04-14T01:10:24.465232331Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 01:10:24.465605 containerd[1453]: time="2026-04-14T01:10:24.465532766Z" level=info msg="Start subscribing containerd event"
Apr 14 01:10:24.465668 containerd[1453]: time="2026-04-14T01:10:24.465610421Z" level=info msg="Start recovering state"
Apr 14 01:10:24.465668 containerd[1453]: time="2026-04-14T01:10:24.465661622Z" level=info msg="Start event monitor"
Apr 14 01:10:24.465720 containerd[1453]: time="2026-04-14T01:10:24.465678536Z" level=info msg="Start snapshots syncer"
Apr 14 01:10:24.465720 containerd[1453]: time="2026-04-14T01:10:24.465685369Z" level=info msg="Start cni network conf syncer for default"
Apr 14 01:10:24.465782 containerd[1453]: time="2026-04-14T01:10:24.465567723Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 14 01:10:24.465810 containerd[1453]: time="2026-04-14T01:10:24.465710690Z" level=info msg="Start streaming server"
Apr 14 01:10:24.465810 containerd[1453]: time="2026-04-14T01:10:24.465792493Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 14 01:10:24.465922 systemd[1]: Started containerd.service - containerd container runtime.
Apr 14 01:10:24.467946 containerd[1453]: time="2026-04-14T01:10:24.467765240Z" level=info msg="containerd successfully booted in 0.039313s"
Apr 14 01:10:24.709290 tar[1450]: linux-amd64/README.md
Apr 14 01:10:24.728746 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 14 01:10:25.766794 systemd-networkd[1380]: eth0: Gained IPv6LL
Apr 14 01:10:25.770716 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 01:10:25.774543 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 01:10:25.787624 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 01:10:25.793424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:25.799257 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 01:10:25.834757 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 01:10:25.835663 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 01:10:25.839619 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 14 01:10:25.877119 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 14 01:10:26.882084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:26.884296 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 14 01:10:26.885870 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 01:10:26.886506 systemd[1]: Startup finished in 923ms (kernel) + 5.547s (initrd) + 4.699s (userspace) = 11.170s.
Apr 14 01:10:27.669478 kubelet[1537]: E0414 01:10:27.669164 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 01:10:27.675800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 01:10:27.676019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 01:10:27.676504 systemd[1]: kubelet.service: Consumed 1.198s CPU time.
Apr 14 01:10:30.166970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 14 01:10:30.169471 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:38158.service - OpenSSH per-connection server daemon (10.0.0.1:38158).
Apr 14 01:10:30.226492 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 38158 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.229572 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.237463 systemd-logind[1434]: New session 1 of user core.
Apr 14 01:10:30.238698 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 14 01:10:30.252245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 14 01:10:30.264416 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 14 01:10:30.267179 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 14 01:10:30.275493 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 14 01:10:30.371436 systemd[1555]: Queued start job for default target default.target.
Apr 14 01:10:30.382122 systemd[1555]: Created slice app.slice - User Application Slice.
Apr 14 01:10:30.382251 systemd[1555]: Reached target paths.target - Paths.
Apr 14 01:10:30.382264 systemd[1555]: Reached target timers.target - Timers.
Apr 14 01:10:30.385371 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 14 01:10:30.407978 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 14 01:10:30.408491 systemd[1555]: Reached target sockets.target - Sockets.
Apr 14 01:10:30.408871 systemd[1555]: Reached target basic.target - Basic System.
Apr 14 01:10:30.409742 systemd[1555]: Reached target default.target - Main User Target.
Apr 14 01:10:30.410081 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 14 01:10:30.410127 systemd[1555]: Startup finished in 129ms.
Apr 14 01:10:30.421867 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 14 01:10:30.485620 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:38162.service - OpenSSH per-connection server daemon (10.0.0.1:38162).
Apr 14 01:10:30.526100 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 38162 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.527720 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.532003 systemd-logind[1434]: New session 2 of user core.
Apr 14 01:10:30.542937 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 14 01:10:30.608274 sshd[1566]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.626383 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:38162.service: Deactivated successfully.
Apr 14 01:10:30.627640 systemd[1]: session-2.scope: Deactivated successfully.
Apr 14 01:10:30.628826 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit.
Apr 14 01:10:30.629635 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:38172.service - OpenSSH per-connection server daemon (10.0.0.1:38172).
Apr 14 01:10:30.630305 systemd-logind[1434]: Removed session 2.
Apr 14 01:10:30.675068 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 38172 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.677429 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.684245 systemd-logind[1434]: New session 3 of user core.
Apr 14 01:10:30.693956 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 14 01:10:30.744725 sshd[1573]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.756086 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:38172.service: Deactivated successfully.
Apr 14 01:10:30.757395 systemd[1]: session-3.scope: Deactivated successfully.
Apr 14 01:10:30.758412 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit.
Apr 14 01:10:30.772625 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:38188.service - OpenSSH per-connection server daemon (10.0.0.1:38188).
Apr 14 01:10:30.773545 systemd-logind[1434]: Removed session 3.
Apr 14 01:10:30.804264 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 38188 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.807054 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.811272 systemd-logind[1434]: New session 4 of user core.
Apr 14 01:10:30.817662 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 14 01:10:30.879242 sshd[1580]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.894749 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:38188.service: Deactivated successfully.
Apr 14 01:10:30.899043 systemd[1]: session-4.scope: Deactivated successfully.
Apr 14 01:10:30.901761 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit.
Apr 14 01:10:30.914813 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:38204.service - OpenSSH per-connection server daemon (10.0.0.1:38204).
Apr 14 01:10:30.916561 systemd-logind[1434]: Removed session 4.
Apr 14 01:10:30.953915 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 38204 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.955997 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.961161 systemd-logind[1434]: New session 5 of user core.
Apr 14 01:10:30.973125 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 14 01:10:31.033575 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 14 01:10:31.033790 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:31.055789 sudo[1590]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:31.058612 sshd[1587]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:31.071493 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:38204.service: Deactivated successfully.
Apr 14 01:10:31.073270 systemd[1]: session-5.scope: Deactivated successfully.
Apr 14 01:10:31.075232 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit.
Apr 14 01:10:31.086012 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:38208.service - OpenSSH per-connection server daemon (10.0.0.1:38208).
Apr 14 01:10:31.088341 systemd-logind[1434]: Removed session 5.
Apr 14 01:10:31.135235 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:31.136386 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:31.141839 systemd-logind[1434]: New session 6 of user core.
Apr 14 01:10:31.152128 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 14 01:10:31.213259 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 14 01:10:31.213492 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:31.219314 sudo[1599]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:31.234402 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 14 01:10:31.234742 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:31.256282 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 14 01:10:31.258056 auditctl[1602]: No rules
Apr 14 01:10:31.258859 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 14 01:10:31.259048 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 14 01:10:31.260505 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 01:10:31.300997 augenrules[1620]: No rules
Apr 14 01:10:31.302503 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 01:10:31.303657 sudo[1598]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:31.306975 sshd[1595]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:31.314608 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:38208.service: Deactivated successfully.
Apr 14 01:10:31.316922 systemd[1]: session-6.scope: Deactivated successfully.
Apr 14 01:10:31.318042 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit.
Apr 14 01:10:31.318969 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:38218.service - OpenSSH per-connection server daemon (10.0.0.1:38218).
Apr 14 01:10:31.320130 systemd-logind[1434]: Removed session 6.
Apr 14 01:10:31.360666 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 38218 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:31.362746 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:31.369133 systemd-logind[1434]: New session 7 of user core.
Apr 14 01:10:31.385393 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 14 01:10:31.446098 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 14 01:10:31.446359 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:31.773554 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 14 01:10:31.773711 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 14 01:10:32.153143 dockerd[1649]: time="2026-04-14T01:10:32.152794498Z" level=info msg="Starting up"
Apr 14 01:10:32.343958 dockerd[1649]: time="2026-04-14T01:10:32.343864204Z" level=info msg="Loading containers: start."
Apr 14 01:10:32.488266 kernel: Initializing XFRM netlink socket
Apr 14 01:10:32.576200 systemd-networkd[1380]: docker0: Link UP
Apr 14 01:10:32.620982 dockerd[1649]: time="2026-04-14T01:10:32.620880524Z" level=info msg="Loading containers: done."
Apr 14 01:10:32.658428 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3417999718-merged.mount: Deactivated successfully.
Apr 14 01:10:32.660070 dockerd[1649]: time="2026-04-14T01:10:32.659944428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 14 01:10:32.660289 dockerd[1649]: time="2026-04-14T01:10:32.660166323Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 14 01:10:32.660313 dockerd[1649]: time="2026-04-14T01:10:32.660296810Z" level=info msg="Daemon has completed initialization"
Apr 14 01:10:32.712285 dockerd[1649]: time="2026-04-14T01:10:32.711898759Z" level=info msg="API listen on /run/docker.sock"
Apr 14 01:10:32.712470 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 14 01:10:33.286104 containerd[1453]: time="2026-04-14T01:10:33.286043444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 14 01:10:33.785922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408682601.mount: Deactivated successfully.
Apr 14 01:10:34.641523 containerd[1453]: time="2026-04-14T01:10:34.641459416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.642746 containerd[1453]: time="2026-04-14T01:10:34.642677935Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947180"
Apr 14 01:10:34.645069 containerd[1453]: time="2026-04-14T01:10:34.644943171Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.647986 containerd[1453]: time="2026-04-14T01:10:34.647915330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.649384 containerd[1453]: time="2026-04-14T01:10:34.649330093Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.36320223s"
Apr 14 01:10:34.649465 containerd[1453]: time="2026-04-14T01:10:34.649385957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\""
Apr 14 01:10:34.652038 containerd[1453]: time="2026-04-14T01:10:34.652003476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 14 01:10:35.430906 containerd[1453]: time="2026-04-14T01:10:35.430825645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.431978 containerd[1453]: time="2026-04-14T01:10:35.431910327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165744"
Apr 14 01:10:35.433435 containerd[1453]: time="2026-04-14T01:10:35.433391620Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.437003 containerd[1453]: time="2026-04-14T01:10:35.436397742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.437575 containerd[1453]: time="2026-04-14T01:10:35.437519046Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 785.401303ms"
Apr 14 01:10:35.437612 containerd[1453]: time="2026-04-14T01:10:35.437579583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\""
Apr 14 01:10:35.438152 containerd[1453]: time="2026-04-14T01:10:35.438130808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 14 01:10:36.178665 containerd[1453]: time="2026-04-14T01:10:36.178592924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.179397 containerd[1453]: time="2026-04-14T01:10:36.179349183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729779"
Apr 14 01:10:36.180536 containerd[1453]: time="2026-04-14T01:10:36.180482681Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.182690 containerd[1453]: time="2026-04-14T01:10:36.182640986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.184249 containerd[1453]: time="2026-04-14T01:10:36.184157802Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 746.000127ms"
Apr 14 01:10:36.184249 containerd[1453]: time="2026-04-14T01:10:36.184226787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\""
Apr 14 01:10:36.185620 containerd[1453]: time="2026-04-14T01:10:36.184774865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 14 01:10:37.051236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245702049.mount: Deactivated successfully.
Apr 14 01:10:37.272155 containerd[1453]: time="2026-04-14T01:10:37.270941037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:37.274640 containerd[1453]: time="2026-04-14T01:10:37.274505582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861668"
Apr 14 01:10:37.279965 containerd[1453]: time="2026-04-14T01:10:37.279853757Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:37.286327 containerd[1453]: time="2026-04-14T01:10:37.286252091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:37.286636 containerd[1453]: time="2026-04-14T01:10:37.286575704Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.101771549s"
Apr 14 01:10:37.286636 containerd[1453]: time="2026-04-14T01:10:37.286624094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\""
Apr 14 01:10:37.287365 containerd[1453]: time="2026-04-14T01:10:37.287320674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 14 01:10:37.777987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 14 01:10:37.782436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:37.784568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290105460.mount: Deactivated successfully.
Apr 14 01:10:37.923954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:37.931086 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 01:10:37.984211 kubelet[1885]: E0414 01:10:37.984051 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 01:10:37.987759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 01:10:37.987880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 01:10:38.538671 containerd[1453]: time="2026-04-14T01:10:38.538537500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.539369 containerd[1453]: time="2026-04-14T01:10:38.539318988Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 14 01:10:38.541367 containerd[1453]: time="2026-04-14T01:10:38.541218051Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.544416 containerd[1453]: time="2026-04-14T01:10:38.544330193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.546977 containerd[1453]: time="2026-04-14T01:10:38.546874716Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.259497056s"
Apr 14 01:10:38.546977 containerd[1453]: time="2026-04-14T01:10:38.546970805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 14 01:10:38.547611 containerd[1453]: time="2026-04-14T01:10:38.547549428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 14 01:10:38.919096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205215829.mount: Deactivated successfully.
Apr 14 01:10:38.927862 containerd[1453]: time="2026-04-14T01:10:38.927766207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.928736 containerd[1453]: time="2026-04-14T01:10:38.928656422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 14 01:10:38.930413 containerd[1453]: time="2026-04-14T01:10:38.930330097Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.933726 containerd[1453]: time="2026-04-14T01:10:38.933574257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.934365 containerd[1453]: time="2026-04-14T01:10:38.934325783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 386.673394ms"
Apr 14 01:10:38.934409 containerd[1453]: time="2026-04-14T01:10:38.934364540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 14 01:10:38.934932 containerd[1453]: time="2026-04-14T01:10:38.934897865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 14 01:10:39.348082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689099987.mount: Deactivated successfully.
Apr 14 01:10:39.931967 containerd[1453]: time="2026-04-14T01:10:39.931704176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.932466 containerd[1453]: time="2026-04-14T01:10:39.932300452Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22873707"
Apr 14 01:10:39.934250 containerd[1453]: time="2026-04-14T01:10:39.934120941Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.937325 containerd[1453]: time="2026-04-14T01:10:39.937226000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.940822 containerd[1453]: time="2026-04-14T01:10:39.940628596Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.00569178s"
Apr 14 01:10:39.940822 containerd[1453]: time="2026-04-14T01:10:39.940809333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 14 01:10:42.871031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:42.880460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:42.902210 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-7.scope)...
Apr 14 01:10:42.902228 systemd[1]: Reloading...
Apr 14 01:10:42.970376 zram_generator::config[2073]: No configuration found.
Apr 14 01:10:43.088483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 01:10:43.135461 systemd[1]: Reloading finished in 233 ms.
Apr 14 01:10:43.175645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:43.177786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:43.178915 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 01:10:43.179107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:43.180346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:43.297566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:43.304128 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 01:10:43.344288 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 01:10:43.344288 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 01:10:43.344800 kubelet[2123]: I0414 01:10:43.344341 2123 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 01:10:43.989026 kubelet[2123]: I0414 01:10:43.988968 2123 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 14 01:10:43.989026 kubelet[2123]: I0414 01:10:43.989004 2123 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 01:10:43.989026 kubelet[2123]: I0414 01:10:43.989030 2123 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 01:10:43.989026 kubelet[2123]: I0414 01:10:43.989038 2123 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 01:10:43.989272 kubelet[2123]: I0414 01:10:43.989242 2123 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 01:10:44.084084 kubelet[2123]: I0414 01:10:44.083873 2123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 01:10:44.084542 kubelet[2123]: E0414 01:10:44.084479 2123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 01:10:44.088907 kubelet[2123]: E0414 01:10:44.088865 2123 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 01:10:44.088974 kubelet[2123]: I0414 01:10:44.088922 2123 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 01:10:44.092020 kubelet[2123]: I0414 01:10:44.091983 2123 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 01:10:44.092568 kubelet[2123]: I0414 01:10:44.092523 2123 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 01:10:44.092786 kubelet[2123]: I0414 01:10:44.092559 2123 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 01:10:44.092786 kubelet[2123]: I0414 01:10:44.092763 2123 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 01:10:44.092786 kubelet[2123]: I0414 01:10:44.092771 2123 container_manager_linux.go:306] "Creating device plugin manager"
Apr 14 01:10:44.092900 kubelet[2123]: I0414 01:10:44.092886 2123 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 01:10:44.095831 kubelet[2123]: I0414 01:10:44.095801 2123 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 01:10:44.095979 kubelet[2123]: I0414 01:10:44.095956 2123 kubelet.go:475] "Attempting to sync node with API server"
Apr 14 01:10:44.095979 kubelet[2123]: I0414 01:10:44.095977 2123 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 01:10:44.096018 kubelet[2123]: I0414 01:10:44.095995 2123 kubelet.go:387] "Adding apiserver pod source"
Apr 14 01:10:44.096018 kubelet[2123]: I0414 01:10:44.096006 2123 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 01:10:44.096599 kubelet[2123]: E0414 01:10:44.096541 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 01:10:44.096702 kubelet[2123]: E0414 01:10:44.096602 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 01:10:44.097629 kubelet[2123]: I0414 01:10:44.097580 2123 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 01:10:44.098290 kubelet[2123]: I0414 01:10:44.098267 2123 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 01:10:44.098325 kubelet[2123]: I0414 01:10:44.098313 2123 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 01:10:44.098391 kubelet[2123]: W0414 01:10:44.098369 2123 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 01:10:44.102370 kubelet[2123]: I0414 01:10:44.102335 2123 server.go:1262] "Started kubelet"
Apr 14 01:10:44.102873 kubelet[2123]: I0414 01:10:44.102466 2123 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 01:10:44.102873 kubelet[2123]: I0414 01:10:44.102508 2123 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 01:10:44.102873 kubelet[2123]: I0414 01:10:44.102752 2123 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 01:10:44.102873 kubelet[2123]: I0414 01:10:44.102799 2123 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 01:10:44.105227 kubelet[2123]: I0414 01:10:44.104306 2123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 01:10:44.105227 kubelet[2123]: I0414 01:10:44.104743 2123 server.go:310] "Adding debug handlers to kubelet server"
Apr 14 01:10:44.105813 kubelet[2123]: I0414 01:10:44.105756 2123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 01:10:44.106838 kubelet[2123]: E0414 01:10:44.105642 2123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a613f48cf87eae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 01:10:44.102282926 +0000 UTC m=+0.794368468,LastTimestamp:2026-04-14 01:10:44.102282926 +0000 UTC m=+0.794368468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 01:10:44.107028 kubelet[2123]: E0414 01:10:44.107004 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 01:10:44.107058 kubelet[2123]: I0414 01:10:44.107036 2123 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 14 01:10:44.107124 kubelet[2123]: I0414 01:10:44.107082 2123 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 01:10:44.107144 kubelet[2123]: I0414 01:10:44.107124 2123 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 01:10:44.107493 kubelet[2123]: E0414 01:10:44.107438 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 01:10:44.107493 kubelet[2123]: E0414 01:10:44.107475 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms"
Apr 14 01:10:44.108269 kubelet[2123]: I0414 01:10:44.108239 2123 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 01:10:44.109211 kubelet[2123]: E0414 01:10:44.109143 2123 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 01:10:44.109315 kubelet[2123]: I0414 01:10:44.109296 2123 factory.go:223] Registration of the containerd container factory successfully
Apr 14 01:10:44.109334 kubelet[2123]: I0414 01:10:44.109317 2123 factory.go:223] Registration of the systemd container factory successfully
Apr 14 01:10:44.120034 kubelet[2123]: I0414 01:10:44.120008 2123 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 01:10:44.120034 kubelet[2123]: I0414 01:10:44.120030 2123 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 01:10:44.120150 kubelet[2123]: I0414 01:10:44.120049 2123 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 01:10:44.123049 kubelet[2123]: I0414 01:10:44.123017 2123 policy_none.go:49] "None policy: Start"
Apr 14 01:10:44.123049 kubelet[2123]: I0414 01:10:44.123046 2123 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 01:10:44.123128 kubelet[2123]: I0414 01:10:44.123056 2123 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 01:10:44.124909 kubelet[2123]: I0414 01:10:44.124879 2123 policy_none.go:47] "Start"
Apr 14 01:10:44.127585 kubelet[2123]: I0414 01:10:44.127511 2123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 01:10:44.129052 kubelet[2123]: I0414 01:10:44.128685 2123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 01:10:44.129052 kubelet[2123]: I0414 01:10:44.128713 2123 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 14 01:10:44.129052 kubelet[2123]: I0414 01:10:44.128734 2123 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 14 01:10:44.129052 kubelet[2123]: E0414 01:10:44.128814 2123 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 01:10:44.130108 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 14 01:10:44.133333 kubelet[2123]: E0414 01:10:44.132387 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 01:10:44.143280 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 14 01:10:44.145845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 14 01:10:44.157211 kubelet[2123]: E0414 01:10:44.157042 2123 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 01:10:44.157376 kubelet[2123]: I0414 01:10:44.157322 2123 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 01:10:44.157376 kubelet[2123]: I0414 01:10:44.157349 2123 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 01:10:44.157668 kubelet[2123]: I0414 01:10:44.157609 2123 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 01:10:44.158638 kubelet[2123]: E0414 01:10:44.158616 2123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 01:10:44.158698 kubelet[2123]: E0414 01:10:44.158658 2123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 01:10:44.259893 systemd[1]: Created slice kubepods-burstable-poda137125e1e6a73398940436c0b6e6d18.slice - libcontainer container kubepods-burstable-poda137125e1e6a73398940436c0b6e6d18.slice.
Apr 14 01:10:44.262936 kubelet[2123]: I0414 01:10:44.261088 2123 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:44.262936 kubelet[2123]: E0414 01:10:44.261541 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Apr 14 01:10:44.289115 kubelet[2123]: E0414 01:10:44.289050 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:44.291461 systemd[1]: Created slice kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice - libcontainer container kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice.
Apr 14 01:10:44.302249 kubelet[2123]: E0414 01:10:44.302209 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:44.304312 systemd[1]: Created slice kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice - libcontainer container kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice.
Apr 14 01:10:44.305476 kubelet[2123]: E0414 01:10:44.305436 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:44.307816 kubelet[2123]: E0414 01:10:44.307791 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms"
Apr 14 01:10:44.308992 kubelet[2123]: I0414 01:10:44.308963 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:44.309037 kubelet[2123]: I0414 01:10:44.309011 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:44.309037 kubelet[2123]: I0414 01:10:44.309027 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:44.309076 kubelet[2123]: I0414 01:10:44.309041 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:44.309076 kubelet[2123]: I0414 01:10:44.309052 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:44.309076 kubelet[2123]: I0414 01:10:44.309064 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:44.309130 kubelet[2123]: I0414 01:10:44.309076 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:44.309130 kubelet[2123]: I0414 01:10:44.309087 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:44.309130 kubelet[2123]: I0414 01:10:44.309098 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:44.463926 kubelet[2123]: I0414 01:10:44.463872 2123 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:44.464443 kubelet[2123]: E0414 01:10:44.464369 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Apr 14 01:10:44.594223 kubelet[2123]: E0414 01:10:44.594055 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:44.595231 containerd[1453]: time="2026-04-14T01:10:44.595149310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a137125e1e6a73398940436c0b6e6d18,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:44.605310 kubelet[2123]: E0414 01:10:44.605267 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:44.605978 containerd[1453]: time="2026-04-14T01:10:44.605906268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:44.607982 kubelet[2123]: E0414 01:10:44.607921 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:44.608749 containerd[1453]: time="2026-04-14T01:10:44.608633829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:44.709300 kubelet[2123]: E0414 01:10:44.709225 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms"
Apr 14 01:10:44.865980 kubelet[2123]: I0414 01:10:44.865948 2123 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:44.866348 kubelet[2123]: E0414 01:10:44.866286 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Apr 14 01:10:44.912683 kubelet[2123]: E0414 01:10:44.912630 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 01:10:44.932492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84229593.mount: Deactivated successfully.
Apr 14 01:10:44.941461 containerd[1453]: time="2026-04-14T01:10:44.941408909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 01:10:44.942223 containerd[1453]: time="2026-04-14T01:10:44.942160344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 01:10:44.942958 containerd[1453]: time="2026-04-14T01:10:44.942898635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 01:10:44.943747 containerd[1453]: time="2026-04-14T01:10:44.943688482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 01:10:44.945009 containerd[1453]: time="2026-04-14T01:10:44.944943261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 01:10:44.945759 containerd[1453]: time="2026-04-14T01:10:44.945703129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 01:10:44.946378 containerd[1453]: time="2026-04-14T01:10:44.946350285Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 01:10:44.951400 containerd[1453]: time="2026-04-14T01:10:44.951285443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 01:10:44.951901 containerd[1453]: time="2026-04-14T01:10:44.951852185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 345.889762ms"
Apr 14 01:10:44.952359 containerd[1453]: time="2026-04-14T01:10:44.952327808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 343.517683ms"
Apr 14 01:10:44.954142 containerd[1453]: time="2026-04-14T01:10:44.954111698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 358.853692ms"
Apr 14 01:10:45.020182 kubelet[2123]: E0414 01:10:45.020046 2123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a613f48cf87eae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 01:10:44.102282926 +0000 UTC m=+0.794368468,LastTimestamp:2026-04-14 01:10:44.102282926 +0000 UTC m=+0.794368468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 01:10:45.037682 kubelet[2123]: E0414 01:10:45.037204 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 01:10:45.084329 containerd[1453]: time="2026-04-14T01:10:45.083915703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:45.084329 containerd[1453]: time="2026-04-14T01:10:45.083964474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:45.084329 containerd[1453]: time="2026-04-14T01:10:45.083985060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.084329 containerd[1453]: time="2026-04-14T01:10:45.084052248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.084602 containerd[1453]: time="2026-04-14T01:10:45.084384216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:45.084602 containerd[1453]: time="2026-04-14T01:10:45.084431101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:45.084602 containerd[1453]: time="2026-04-14T01:10:45.084453208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.084602 containerd[1453]: time="2026-04-14T01:10:45.084518065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.088007 containerd[1453]: time="2026-04-14T01:10:45.085625886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:45.088007 containerd[1453]: time="2026-04-14T01:10:45.085691990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:45.088007 containerd[1453]: time="2026-04-14T01:10:45.085700487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.088007 containerd[1453]: time="2026-04-14T01:10:45.085755276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:45.107827 systemd[1]: Started cri-containerd-97747223463d0b153a8780b80e7d25b7da60b0f79acc30b036d8efafd76c6840.scope - libcontainer container 97747223463d0b153a8780b80e7d25b7da60b0f79acc30b036d8efafd76c6840.
Apr 14 01:10:45.111720 systemd[1]: Started cri-containerd-cee15409ebce5c6b12b5516b6f843f10a6c4946084243f90eef62c327e997570.scope - libcontainer container cee15409ebce5c6b12b5516b6f843f10a6c4946084243f90eef62c327e997570.
Apr 14 01:10:45.113838 systemd[1]: Started cri-containerd-ea4af3e1c5efb041d775f98a62bfb8c4bdf68e1bac2c8d29328cecf32474fb7d.scope - libcontainer container ea4af3e1c5efb041d775f98a62bfb8c4bdf68e1bac2c8d29328cecf32474fb7d.
Apr 14 01:10:45.155125 containerd[1453]: time="2026-04-14T01:10:45.155036296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a137125e1e6a73398940436c0b6e6d18,Namespace:kube-system,Attempt:0,} returns sandbox id \"97747223463d0b153a8780b80e7d25b7da60b0f79acc30b036d8efafd76c6840\""
Apr 14 01:10:45.158694 kubelet[2123]: E0414 01:10:45.158674 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:45.159475 containerd[1453]: time="2026-04-14T01:10:45.159446744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4af3e1c5efb041d775f98a62bfb8c4bdf68e1bac2c8d29328cecf32474fb7d\""
Apr 14 01:10:45.159957 kubelet[2123]: E0414 01:10:45.159903 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:45.163531 containerd[1453]: time="2026-04-14T01:10:45.163060376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee15409ebce5c6b12b5516b6f843f10a6c4946084243f90eef62c327e997570\""
Apr 14 01:10:45.163788 kubelet[2123]: E0414 01:10:45.163775 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:45.166957 containerd[1453]: time="2026-04-14T01:10:45.166924596Z" level=info msg="CreateContainer within sandbox \"97747223463d0b153a8780b80e7d25b7da60b0f79acc30b036d8efafd76c6840\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 01:10:45.170109 containerd[1453]: time="2026-04-14T01:10:45.169926772Z" level=info msg="CreateContainer within sandbox \"ea4af3e1c5efb041d775f98a62bfb8c4bdf68e1bac2c8d29328cecf32474fb7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 14 01:10:45.173025 containerd[1453]: time="2026-04-14T01:10:45.172989579Z" level=info msg="CreateContainer within sandbox \"cee15409ebce5c6b12b5516b6f843f10a6c4946084243f90eef62c327e997570\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 01:10:45.195664 containerd[1453]: time="2026-04-14T01:10:45.195477379Z" level=info msg="CreateContainer within sandbox \"97747223463d0b153a8780b80e7d25b7da60b0f79acc30b036d8efafd76c6840\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36cbaf9323fd8ab69507a56d31b196fdf23011baa9bbb947cc83e58cb0435c82\""
Apr 14 01:10:45.196357 containerd[1453]: time="2026-04-14T01:10:45.196336044Z" level=info msg="StartContainer for \"36cbaf9323fd8ab69507a56d31b196fdf23011baa9bbb947cc83e58cb0435c82\""
Apr 14 01:10:45.196859 containerd[1453]: time="2026-04-14T01:10:45.196703751Z" level=info msg="CreateContainer within sandbox \"ea4af3e1c5efb041d775f98a62bfb8c4bdf68e1bac2c8d29328cecf32474fb7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64e33218099d117c3de52411ec742ecb6d20c34c5c8c4240c6b872d18331f414\""
Apr 14 01:10:45.197970 containerd[1453]: time="2026-04-14T01:10:45.197925908Z" level=info msg="StartContainer for \"64e33218099d117c3de52411ec742ecb6d20c34c5c8c4240c6b872d18331f414\""
Apr 14 01:10:45.200066 containerd[1453]: time="2026-04-14T01:10:45.200025557Z" level=info msg="CreateContainer within sandbox \"cee15409ebce5c6b12b5516b6f843f10a6c4946084243f90eef62c327e997570\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e07164f04a7fb21ecd8843493e2ccbb62fbc9a4c7bfa3e78a6b50a239ce134f6\""
Apr 14 01:10:45.201241 containerd[1453]: time="2026-04-14T01:10:45.200520440Z" level=info msg="StartContainer for \"e07164f04a7fb21ecd8843493e2ccbb62fbc9a4c7bfa3e78a6b50a239ce134f6\""
Apr 14 01:10:45.219351 systemd[1]: Started cri-containerd-36cbaf9323fd8ab69507a56d31b196fdf23011baa9bbb947cc83e58cb0435c82.scope - libcontainer container 36cbaf9323fd8ab69507a56d31b196fdf23011baa9bbb947cc83e58cb0435c82.
Apr 14 01:10:45.244999 systemd[1]: Started cri-containerd-64e33218099d117c3de52411ec742ecb6d20c34c5c8c4240c6b872d18331f414.scope - libcontainer container 64e33218099d117c3de52411ec742ecb6d20c34c5c8c4240c6b872d18331f414.
Apr 14 01:10:45.247730 systemd[1]: Started cri-containerd-e07164f04a7fb21ecd8843493e2ccbb62fbc9a4c7bfa3e78a6b50a239ce134f6.scope - libcontainer container cri-containerd-e07164f04a7fb21ecd8843493e2ccbb62fbc9a4c7bfa3e78a6b50a239ce134f6.scope.
Apr 14 01:10:45.259367 containerd[1453]: time="2026-04-14T01:10:45.259337742Z" level=info msg="StartContainer for \"36cbaf9323fd8ab69507a56d31b196fdf23011baa9bbb947cc83e58cb0435c82\" returns successfully"
Apr 14 01:10:45.292958 containerd[1453]: time="2026-04-14T01:10:45.292925739Z" level=info msg="StartContainer for \"64e33218099d117c3de52411ec742ecb6d20c34c5c8c4240c6b872d18331f414\" returns successfully"
Apr 14 01:10:45.293904 containerd[1453]: time="2026-04-14T01:10:45.293121586Z" level=info msg="StartContainer for \"e07164f04a7fb21ecd8843493e2ccbb62fbc9a4c7bfa3e78a6b50a239ce134f6\" returns successfully"
Apr 14 01:10:45.668637 kubelet[2123]: I0414 01:10:45.668563 2123 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:46.139565 kubelet[2123]: E0414 01:10:46.139518 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:46.139717 kubelet[2123]: E0414 01:10:46.139688 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:46.143404 kubelet[2123]: E0414 01:10:46.143373
2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:46.143519 kubelet[2123]: E0414 01:10:46.143494 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:46.146295 kubelet[2123]: E0414 01:10:46.146267 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:46.148560 kubelet[2123]: E0414 01:10:46.146374 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:46.283615 kubelet[2123]: E0414 01:10:46.283536 2123 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 01:10:46.362308 kubelet[2123]: I0414 01:10:46.362250 2123 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 01:10:46.362308 kubelet[2123]: E0414 01:10:46.362297 2123 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 01:10:46.384013 kubelet[2123]: E0414 01:10:46.383932 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.484907 kubelet[2123]: E0414 01:10:46.484742 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.585157 kubelet[2123]: E0414 01:10:46.585063 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.686435 kubelet[2123]: E0414 01:10:46.686316 2123 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.787630 kubelet[2123]: E0414 01:10:46.787440 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.888768 kubelet[2123]: E0414 01:10:46.888662 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:46.990129 kubelet[2123]: E0414 01:10:46.989993 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.091842 kubelet[2123]: E0414 01:10:47.090713 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.148056 kubelet[2123]: E0414 01:10:47.148007 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:47.148265 kubelet[2123]: E0414 01:10:47.148122 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:47.148265 kubelet[2123]: E0414 01:10:47.148128 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:47.148265 kubelet[2123]: E0414 01:10:47.148243 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:47.191795 kubelet[2123]: E0414 01:10:47.191716 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.293226 kubelet[2123]: E0414 01:10:47.293009 2123 kubelet_node_status.go:404] "Error getting the current 
node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.394150 kubelet[2123]: E0414 01:10:47.394087 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.495572 kubelet[2123]: E0414 01:10:47.495152 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.595970 kubelet[2123]: E0414 01:10:47.595878 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.696556 kubelet[2123]: E0414 01:10:47.696344 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.797273 kubelet[2123]: E0414 01:10:47.797152 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.893070 kubelet[2123]: E0414 01:10:47.893013 2123 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:47.893293 kubelet[2123]: E0414 01:10:47.893279 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:47.898347 kubelet[2123]: E0414 01:10:47.898146 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:47.999661 kubelet[2123]: E0414 01:10:47.998972 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:48.099588 kubelet[2123]: E0414 01:10:48.099530 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:48.216909 kubelet[2123]: E0414 01:10:48.215042 2123 kubelet_node_status.go:404] "Error getting 
the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:48.308367 kubelet[2123]: I0414 01:10:48.307575 2123 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:10:48.317190 kubelet[2123]: I0414 01:10:48.317128 2123 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:48.325139 kubelet[2123]: I0414 01:10:48.324917 2123 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.099587 kubelet[2123]: I0414 01:10:49.099520 2123 apiserver.go:52] "Watching apiserver" Apr 14 01:10:49.102353 kubelet[2123]: E0414 01:10:49.102292 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:49.102535 kubelet[2123]: E0414 01:10:49.102491 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:49.102743 kubelet[2123]: E0414 01:10:49.102707 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:49.109835 kubelet[2123]: I0414 01:10:49.109556 2123 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 01:10:49.256116 systemd[1]: Reloading requested from client PID 2413 ('systemctl') (unit session-7.scope)... Apr 14 01:10:49.256154 systemd[1]: Reloading... Apr 14 01:10:49.331222 zram_generator::config[2451]: No configuration found. 
Apr 14 01:10:49.418708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:10:49.479796 systemd[1]: Reloading finished in 223 ms. Apr 14 01:10:49.514950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:10:49.533747 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 01:10:49.534095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:10:49.534211 systemd[1]: kubelet.service: Consumed 1.183s CPU time, 127.1M memory peak, 0B memory swap peak. Apr 14 01:10:49.544845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:10:49.671016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:10:49.675629 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 01:10:49.727546 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 01:10:49.727546 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 01:10:49.727945 kubelet[2497]: I0414 01:10:49.727568 2497 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 01:10:49.735348 kubelet[2497]: I0414 01:10:49.735067 2497 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 14 01:10:49.735348 kubelet[2497]: I0414 01:10:49.735103 2497 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 01:10:49.735680 kubelet[2497]: I0414 01:10:49.735488 2497 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 14 01:10:49.735680 kubelet[2497]: I0414 01:10:49.735532 2497 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 01:10:49.736475 kubelet[2497]: I0414 01:10:49.736161 2497 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 01:10:49.737696 kubelet[2497]: I0414 01:10:49.737619 2497 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 01:10:49.742474 kubelet[2497]: I0414 01:10:49.742413 2497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 01:10:49.748227 kubelet[2497]: E0414 01:10:49.745601 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 01:10:49.748227 kubelet[2497]: I0414 01:10:49.745693 2497 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 14 01:10:49.749992 kubelet[2497]: I0414 01:10:49.749954 2497 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 14 01:10:49.750131 kubelet[2497]: I0414 01:10:49.750094 2497 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 01:10:49.750341 kubelet[2497]: I0414 01:10:49.750129 2497 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 01:10:49.750428 kubelet[2497]: I0414 01:10:49.750344 2497 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 01:10:49.750428 
kubelet[2497]: I0414 01:10:49.750351 2497 container_manager_linux.go:306] "Creating device plugin manager" Apr 14 01:10:49.750428 kubelet[2497]: I0414 01:10:49.750368 2497 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 14 01:10:49.750568 kubelet[2497]: I0414 01:10:49.750548 2497 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:10:49.750748 kubelet[2497]: I0414 01:10:49.750726 2497 kubelet.go:475] "Attempting to sync node with API server" Apr 14 01:10:49.750769 kubelet[2497]: I0414 01:10:49.750764 2497 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 01:10:49.750792 kubelet[2497]: I0414 01:10:49.750782 2497 kubelet.go:387] "Adding apiserver pod source" Apr 14 01:10:49.750792 kubelet[2497]: I0414 01:10:49.750791 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 01:10:49.758207 kubelet[2497]: I0414 01:10:49.756284 2497 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 01:10:49.758207 kubelet[2497]: I0414 01:10:49.757825 2497 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 01:10:49.758207 kubelet[2497]: I0414 01:10:49.757866 2497 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 14 01:10:49.766317 kubelet[2497]: I0414 01:10:49.766293 2497 server.go:1262] "Started kubelet" Apr 14 01:10:49.766874 kubelet[2497]: I0414 01:10:49.766846 2497 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 01:10:49.767314 kubelet[2497]: I0414 01:10:49.767259 2497 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 01:10:49.767350 kubelet[2497]: I0414 01:10:49.767333 2497 server_v1.go:49] 
"podresources" method="list" useActivePods=true Apr 14 01:10:49.769643 kubelet[2497]: I0414 01:10:49.769604 2497 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 01:10:49.772474 kubelet[2497]: I0414 01:10:49.772417 2497 server.go:310] "Adding debug handlers to kubelet server" Apr 14 01:10:49.775058 kubelet[2497]: I0414 01:10:49.775029 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 01:10:49.775582 kubelet[2497]: I0414 01:10:49.775392 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 01:10:49.777298 kubelet[2497]: I0414 01:10:49.777260 2497 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 14 01:10:49.777463 kubelet[2497]: I0414 01:10:49.777366 2497 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 14 01:10:49.779158 kubelet[2497]: I0414 01:10:49.779103 2497 reconciler.go:29] "Reconciler: start to sync state" Apr 14 01:10:49.779584 kubelet[2497]: E0414 01:10:49.779370 2497 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 01:10:49.782960 kubelet[2497]: I0414 01:10:49.782754 2497 factory.go:223] Registration of the systemd container factory successfully Apr 14 01:10:49.782960 kubelet[2497]: I0414 01:10:49.782882 2497 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 01:10:49.788142 kubelet[2497]: I0414 01:10:49.786285 2497 factory.go:223] Registration of the containerd container factory successfully Apr 14 01:10:49.814899 kubelet[2497]: I0414 01:10:49.814842 2497 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 14 01:10:49.818025 kubelet[2497]: I0414 01:10:49.817908 2497 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 14 01:10:49.818025 kubelet[2497]: I0414 01:10:49.817954 2497 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 14 01:10:49.820716 kubelet[2497]: I0414 01:10:49.820418 2497 kubelet.go:2428] "Starting kubelet main sync loop" Apr 14 01:10:49.820716 kubelet[2497]: E0414 01:10:49.820538 2497 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 01:10:49.834934 kubelet[2497]: I0414 01:10:49.834843 2497 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 01:10:49.834934 kubelet[2497]: I0414 01:10:49.834880 2497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 01:10:49.834934 kubelet[2497]: I0414 01:10:49.834901 2497 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:10:49.835154 kubelet[2497]: I0414 01:10:49.835036 2497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 01:10:49.835154 kubelet[2497]: I0414 01:10:49.835046 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 01:10:49.835279 kubelet[2497]: I0414 01:10:49.835240 2497 policy_none.go:49] "None policy: Start" Apr 14 01:10:49.835279 kubelet[2497]: I0414 01:10:49.835254 2497 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 14 01:10:49.835279 kubelet[2497]: I0414 01:10:49.835267 2497 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 14 01:10:49.835410 kubelet[2497]: I0414 01:10:49.835379 2497 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 14 01:10:49.835410 kubelet[2497]: I0414 01:10:49.835388 2497 policy_none.go:47] "Start" Apr 14 01:10:49.844305 kubelet[2497]: E0414 01:10:49.844259 2497 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 01:10:49.844766 kubelet[2497]: I0414 01:10:49.844415 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 01:10:49.844766 kubelet[2497]: I0414 01:10:49.844458 2497 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 01:10:49.844766 kubelet[2497]: I0414 01:10:49.844720 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 01:10:49.845654 kubelet[2497]: E0414 01:10:49.845642 2497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 01:10:49.923849 kubelet[2497]: I0414 01:10:49.923632 2497 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:49.923849 kubelet[2497]: I0414 01:10:49.923711 2497 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.923849 kubelet[2497]: I0414 01:10:49.923814 2497 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:10:49.936478 kubelet[2497]: E0414 01:10:49.936345 2497 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 14 01:10:49.936478 kubelet[2497]: E0414 01:10:49.936393 2497 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:49.936712 kubelet[2497]: E0414 01:10:49.936346 2497 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.950363 kubelet[2497]: I0414 01:10:49.950317 2497 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Apr 14 01:10:49.958639 kubelet[2497]: I0414 01:10:49.958600 2497 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 01:10:49.958797 kubelet[2497]: I0414 01:10:49.958697 2497 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 01:10:49.980911 kubelet[2497]: I0414 01:10:49.980815 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:49.980911 kubelet[2497]: I0414 01:10:49.980873 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.980911 kubelet[2497]: I0414 01:10:49.980893 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.980911 kubelet[2497]: I0414 01:10:49.980917 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:49.981209 kubelet[2497]: I0414 01:10:49.980948 2497 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a137125e1e6a73398940436c0b6e6d18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a137125e1e6a73398940436c0b6e6d18\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:49.981209 kubelet[2497]: I0414 01:10:49.980968 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.981209 kubelet[2497]: I0414 01:10:49.981028 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.981209 kubelet[2497]: I0414 01:10:49.981094 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:49.981209 kubelet[2497]: I0414 01:10:49.981118 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost" Apr 14 01:10:50.237223 kubelet[2497]: E0414 
01:10:50.237098 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.237223 kubelet[2497]: E0414 01:10:50.237214 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.237377 kubelet[2497]: E0414 01:10:50.237106 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.257016 sudo[2536]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 14 01:10:50.257683 sudo[2536]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 14 01:10:50.752729 kubelet[2497]: I0414 01:10:50.752327 2497 apiserver.go:52] "Watching apiserver" Apr 14 01:10:50.762026 sudo[2536]: pam_unix(sudo:session): session closed for user root Apr 14 01:10:50.778302 kubelet[2497]: I0414 01:10:50.778238 2497 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 01:10:50.833868 kubelet[2497]: I0414 01:10:50.833488 2497 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:50.833868 kubelet[2497]: E0414 01:10:50.833766 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.834254 kubelet[2497]: E0414 01:10:50.834039 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.840699 kubelet[2497]: E0414 01:10:50.840646 2497 kubelet.go:3222] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:50.840890 kubelet[2497]: E0414 01:10:50.840868 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:50.876697 kubelet[2497]: I0414 01:10:50.876583 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.87655867 podStartE2EDuration="2.87655867s" podCreationTimestamp="2026-04-14 01:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.864823774 +0000 UTC m=+1.185556970" watchObservedRunningTime="2026-04-14 01:10:50.87655867 +0000 UTC m=+1.197291878" Apr 14 01:10:50.887374 kubelet[2497]: I0414 01:10:50.886625 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.886605626 podStartE2EDuration="2.886605626s" podCreationTimestamp="2026-04-14 01:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.877667188 +0000 UTC m=+1.198400400" watchObservedRunningTime="2026-04-14 01:10:50.886605626 +0000 UTC m=+1.207338823" Apr 14 01:10:50.907957 kubelet[2497]: I0414 01:10:50.907888 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.907868641 podStartE2EDuration="2.907868641s" podCreationTimestamp="2026-04-14 01:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.886738192 +0000 UTC m=+1.207471394" watchObservedRunningTime="2026-04-14 
01:10:50.907868641 +0000 UTC m=+1.228601839" Apr 14 01:10:51.834432 kubelet[2497]: E0414 01:10:51.834372 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:51.835005 kubelet[2497]: E0414 01:10:51.834968 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:52.080112 sudo[1631]: pam_unix(sudo:session): session closed for user root Apr 14 01:10:52.081430 sshd[1628]: pam_unix(sshd:session): session closed for user core Apr 14 01:10:52.083428 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:38218.service: Deactivated successfully. Apr 14 01:10:52.084739 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 01:10:52.084889 systemd[1]: session-7.scope: Consumed 5.204s CPU time, 159.2M memory peak, 0B memory swap peak. Apr 14 01:10:52.085889 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Apr 14 01:10:52.086685 systemd-logind[1434]: Removed session 7. Apr 14 01:10:55.161974 kubelet[2497]: I0414 01:10:55.161854 2497 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 01:10:55.163053 kubelet[2497]: I0414 01:10:55.163019 2497 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 01:10:55.163104 containerd[1453]: time="2026-04-14T01:10:55.162710582Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 14 01:10:55.638326 kubelet[2497]: E0414 01:10:55.638284 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:55.846001 kubelet[2497]: E0414 01:10:55.845966 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.220302 systemd[1]: Created slice kubepods-besteffort-podb415e710_9a1a_4000_8fae_ce60845ba4c0.slice - libcontainer container kubepods-besteffort-podb415e710_9a1a_4000_8fae_ce60845ba4c0.slice.
Apr 14 01:10:56.236019 systemd[1]: Created slice kubepods-burstable-pod5759a036_e80f_4c0b_b00a_328cc881450c.slice - libcontainer container kubepods-burstable-pod5759a036_e80f_4c0b_b00a_328cc881450c.slice.
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251415 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-cgroup\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251463 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5759a036-e80f-4c0b-b00a-328cc881450c-clustermesh-secrets\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251483 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-run\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251495 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-xtables-lock\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251513 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-config-path\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.251502 kubelet[2497]: I0414 01:10:56.251524 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-hubble-tls\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252094 kubelet[2497]: I0414 01:10:56.251535 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xps77\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-kube-api-access-xps77\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252094 kubelet[2497]: I0414 01:10:56.251547 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbd7\" (UniqueName: \"kubernetes.io/projected/b415e710-9a1a-4000-8fae-ce60845ba4c0-kube-api-access-2tbd7\") pod \"kube-proxy-7d2gq\" (UID: \"b415e710-9a1a-4000-8fae-ce60845ba4c0\") " pod="kube-system/kube-proxy-7d2gq"
Apr 14 01:10:56.252094 kubelet[2497]: I0414 01:10:56.251557 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-etc-cni-netd\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252094 kubelet[2497]: I0414 01:10:56.251568 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-lib-modules\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252094 kubelet[2497]: I0414 01:10:56.251579 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-kernel\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251590 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b415e710-9a1a-4000-8fae-ce60845ba4c0-xtables-lock\") pod \"kube-proxy-7d2gq\" (UID: \"b415e710-9a1a-4000-8fae-ce60845ba4c0\") " pod="kube-system/kube-proxy-7d2gq"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251778 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b415e710-9a1a-4000-8fae-ce60845ba4c0-lib-modules\") pod \"kube-proxy-7d2gq\" (UID: \"b415e710-9a1a-4000-8fae-ce60845ba4c0\") " pod="kube-system/kube-proxy-7d2gq"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251829 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cni-path\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251844 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-net\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251864 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b415e710-9a1a-4000-8fae-ce60845ba4c0-kube-proxy\") pod \"kube-proxy-7d2gq\" (UID: \"b415e710-9a1a-4000-8fae-ce60845ba4c0\") " pod="kube-system/kube-proxy-7d2gq"
Apr 14 01:10:56.252312 kubelet[2497]: I0414 01:10:56.251876 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-bpf-maps\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.252436 kubelet[2497]: I0414 01:10:56.251885 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-hostproc\") pod \"cilium-q2qdj\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " pod="kube-system/cilium-q2qdj"
Apr 14 01:10:56.396515 systemd[1]: Created slice kubepods-besteffort-podee726647_6e20_4c62_be0b_e8e3a4442292.slice - libcontainer container kubepods-besteffort-podee726647_6e20_4c62_be0b_e8e3a4442292.slice.
Apr 14 01:10:56.458988 kubelet[2497]: I0414 01:10:56.458880 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzf5\" (UniqueName: \"kubernetes.io/projected/ee726647-6e20-4c62-be0b-e8e3a4442292-kube-api-access-lvzf5\") pod \"cilium-operator-6f9c7c5859-6262q\" (UID: \"ee726647-6e20-4c62-be0b-e8e3a4442292\") " pod="kube-system/cilium-operator-6f9c7c5859-6262q"
Apr 14 01:10:56.458988 kubelet[2497]: I0414 01:10:56.458941 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee726647-6e20-4c62-be0b-e8e3a4442292-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-6262q\" (UID: \"ee726647-6e20-4c62-be0b-e8e3a4442292\") " pod="kube-system/cilium-operator-6f9c7c5859-6262q"
Apr 14 01:10:56.536056 kubelet[2497]: E0414 01:10:56.535918 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.536899 containerd[1453]: time="2026-04-14T01:10:56.536808907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7d2gq,Uid:b415e710-9a1a-4000-8fae-ce60845ba4c0,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:56.540491 kubelet[2497]: E0414 01:10:56.540462 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.541054 containerd[1453]: time="2026-04-14T01:10:56.541022467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2qdj,Uid:5759a036-e80f-4c0b-b00a-328cc881450c,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:56.571957 containerd[1453]: time="2026-04-14T01:10:56.571875066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:56.571957 containerd[1453]: time="2026-04-14T01:10:56.571925910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:56.571957 containerd[1453]: time="2026-04-14T01:10:56.571938209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.574199 containerd[1453]: time="2026-04-14T01:10:56.573283371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.575204 containerd[1453]: time="2026-04-14T01:10:56.575026720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:56.575204 containerd[1453]: time="2026-04-14T01:10:56.575076519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:56.575204 containerd[1453]: time="2026-04-14T01:10:56.575088620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.575204 containerd[1453]: time="2026-04-14T01:10:56.575136622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.589370 systemd[1]: Started cri-containerd-43e69e6509ccbbd10906403ec0e9f85c7658315879a0dfc965a24f7b2cfdd64f.scope - libcontainer container 43e69e6509ccbbd10906403ec0e9f85c7658315879a0dfc965a24f7b2cfdd64f.
Apr 14 01:10:56.591708 systemd[1]: Started cri-containerd-3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185.scope - libcontainer container 3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185.
Apr 14 01:10:56.615382 containerd[1453]: time="2026-04-14T01:10:56.615293456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2qdj,Uid:5759a036-e80f-4c0b-b00a-328cc881450c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\""
Apr 14 01:10:56.615510 containerd[1453]: time="2026-04-14T01:10:56.615474095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7d2gq,Uid:b415e710-9a1a-4000-8fae-ce60845ba4c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"43e69e6509ccbbd10906403ec0e9f85c7658315879a0dfc965a24f7b2cfdd64f\""
Apr 14 01:10:56.616404 kubelet[2497]: E0414 01:10:56.616357 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.616576 kubelet[2497]: E0414 01:10:56.616556 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.618333 containerd[1453]: time="2026-04-14T01:10:56.618233280Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 14 01:10:56.624387 containerd[1453]: time="2026-04-14T01:10:56.624258701Z" level=info msg="CreateContainer within sandbox \"43e69e6509ccbbd10906403ec0e9f85c7658315879a0dfc965a24f7b2cfdd64f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 14 01:10:56.640725 containerd[1453]: time="2026-04-14T01:10:56.640665507Z" level=info msg="CreateContainer within sandbox \"43e69e6509ccbbd10906403ec0e9f85c7658315879a0dfc965a24f7b2cfdd64f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4bc5e09a5513c7f686e3212a91a4aa64ed537a8eba4cf61f45b935d9953557e3\""
Apr 14 01:10:56.641328 containerd[1453]: time="2026-04-14T01:10:56.641301329Z" level=info msg="StartContainer for \"4bc5e09a5513c7f686e3212a91a4aa64ed537a8eba4cf61f45b935d9953557e3\""
Apr 14 01:10:56.670433 systemd[1]: Started cri-containerd-4bc5e09a5513c7f686e3212a91a4aa64ed537a8eba4cf61f45b935d9953557e3.scope - libcontainer container 4bc5e09a5513c7f686e3212a91a4aa64ed537a8eba4cf61f45b935d9953557e3.
Apr 14 01:10:56.693824 containerd[1453]: time="2026-04-14T01:10:56.693771154Z" level=info msg="StartContainer for \"4bc5e09a5513c7f686e3212a91a4aa64ed537a8eba4cf61f45b935d9953557e3\" returns successfully"
Apr 14 01:10:56.701364 kubelet[2497]: E0414 01:10:56.701294 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.702477 containerd[1453]: time="2026-04-14T01:10:56.702422432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-6262q,Uid:ee726647-6e20-4c62-be0b-e8e3a4442292,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:56.735239 containerd[1453]: time="2026-04-14T01:10:56.735073412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:56.735792 containerd[1453]: time="2026-04-14T01:10:56.735743904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:56.735792 containerd[1453]: time="2026-04-14T01:10:56.735759571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.735955 containerd[1453]: time="2026-04-14T01:10:56.735866533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:56.755565 systemd[1]: Started cri-containerd-5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb.scope - libcontainer container 5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb.
Apr 14 01:10:56.799297 containerd[1453]: time="2026-04-14T01:10:56.799054002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-6262q,Uid:ee726647-6e20-4c62-be0b-e8e3a4442292,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\""
Apr 14 01:10:56.800555 kubelet[2497]: E0414 01:10:56.800154 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:56.850609 kubelet[2497]: E0414 01:10:56.850550 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:57.892237 kubelet[2497]: E0414 01:10:57.892020 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:57.905920 kubelet[2497]: I0414 01:10:57.905853 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7d2gq" podStartSLOduration=1.905833772 podStartE2EDuration="1.905833772s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:56.862218476 +0000 UTC m=+7.182951667" watchObservedRunningTime="2026-04-14 01:10:57.905833772 +0000 UTC m=+8.226566974"
Apr 14 01:10:58.694737 kubelet[2497]: E0414 01:10:58.694675 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:58.854726 kubelet[2497]: E0414 01:10:58.854680 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:58.855287 kubelet[2497]: E0414 01:10:58.855253 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:59.855525 kubelet[2497]: E0414 01:10:59.855502 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:01.359649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638092874.mount: Deactivated successfully.
Apr 14 01:11:02.940021 containerd[1453]: time="2026-04-14T01:11:02.939910651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:02.940741 containerd[1453]: time="2026-04-14T01:11:02.940684172Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 14 01:11:02.942510 containerd[1453]: time="2026-04-14T01:11:02.942418872Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:02.944024 containerd[1453]: time="2026-04-14T01:11:02.943974937Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.325595163s"
Apr 14 01:11:02.944024 containerd[1453]: time="2026-04-14T01:11:02.944013559Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 14 01:11:02.946116 containerd[1453]: time="2026-04-14T01:11:02.945899627Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 14 01:11:02.950209 containerd[1453]: time="2026-04-14T01:11:02.950122668Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 14 01:11:02.962782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075289647.mount: Deactivated successfully.
Apr 14 01:11:02.965891 containerd[1453]: time="2026-04-14T01:11:02.965776524Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\""
Apr 14 01:11:02.967002 containerd[1453]: time="2026-04-14T01:11:02.966363215Z" level=info msg="StartContainer for \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\""
Apr 14 01:11:02.989453 systemd[1]: run-containerd-runc-k8s.io-3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055-runc.q7pBmH.mount: Deactivated successfully.
Apr 14 01:11:03.003047 systemd[1]: Started cri-containerd-3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055.scope - libcontainer container 3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055.
Apr 14 01:11:03.030058 containerd[1453]: time="2026-04-14T01:11:03.029949465Z" level=info msg="StartContainer for \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\" returns successfully"
Apr 14 01:11:03.036415 systemd[1]: cri-containerd-3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055.scope: Deactivated successfully.
Apr 14 01:11:03.124340 containerd[1453]: time="2026-04-14T01:11:03.124256090Z" level=info msg="shim disconnected" id=3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055 namespace=k8s.io
Apr 14 01:11:03.124563 containerd[1453]: time="2026-04-14T01:11:03.124400697Z" level=warning msg="cleaning up after shim disconnected" id=3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055 namespace=k8s.io
Apr 14 01:11:03.124563 containerd[1453]: time="2026-04-14T01:11:03.124413155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:11:03.875862 kubelet[2497]: E0414 01:11:03.875831 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:03.885837 containerd[1453]: time="2026-04-14T01:11:03.885769474Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 14 01:11:03.921688 containerd[1453]: time="2026-04-14T01:11:03.921562297Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\""
Apr 14 01:11:03.922402 containerd[1453]: time="2026-04-14T01:11:03.922351050Z" level=info msg="StartContainer for \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\""
Apr 14 01:11:03.947361 systemd[1]: Started cri-containerd-60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e.scope - libcontainer container 60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e.
Apr 14 01:11:03.962015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055-rootfs.mount: Deactivated successfully.
Apr 14 01:11:03.966741 containerd[1453]: time="2026-04-14T01:11:03.966706730Z" level=info msg="StartContainer for \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\" returns successfully"
Apr 14 01:11:03.974743 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 01:11:03.974893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:11:03.974938 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 14 01:11:03.979673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 01:11:03.979863 systemd[1]: cri-containerd-60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e.scope: Deactivated successfully.
Apr 14 01:11:03.995814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e-rootfs.mount: Deactivated successfully.
Apr 14 01:11:04.003000 containerd[1453]: time="2026-04-14T01:11:04.002935339Z" level=info msg="shim disconnected" id=60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e namespace=k8s.io
Apr 14 01:11:04.003196 containerd[1453]: time="2026-04-14T01:11:04.003020796Z" level=warning msg="cleaning up after shim disconnected" id=60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e namespace=k8s.io
Apr 14 01:11:04.003196 containerd[1453]: time="2026-04-14T01:11:04.003029306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:11:04.006863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:11:04.254778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452845277.mount: Deactivated successfully.
Apr 14 01:11:04.523582 containerd[1453]: time="2026-04-14T01:11:04.523451174Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:04.524160 containerd[1453]: time="2026-04-14T01:11:04.524125939Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 14 01:11:04.525107 containerd[1453]: time="2026-04-14T01:11:04.525062284Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:04.526818 containerd[1453]: time="2026-04-14T01:11:04.526670298Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.580732236s"
Apr 14 01:11:04.526818 containerd[1453]: time="2026-04-14T01:11:04.526810060Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 14 01:11:04.531997 containerd[1453]: time="2026-04-14T01:11:04.531924330Z" level=info msg="CreateContainer within sandbox \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 14 01:11:04.543211 containerd[1453]: time="2026-04-14T01:11:04.543129871Z" level=info msg="CreateContainer within sandbox \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\""
Apr 14 01:11:04.544437 containerd[1453]: time="2026-04-14T01:11:04.544397947Z" level=info msg="StartContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\""
Apr 14 01:11:04.571483 systemd[1]: Started cri-containerd-37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f.scope - libcontainer container 37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f.
Apr 14 01:11:04.603425 containerd[1453]: time="2026-04-14T01:11:04.603385824Z" level=info msg="StartContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" returns successfully"
Apr 14 01:11:04.879657 kubelet[2497]: E0414 01:11:04.879625 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:04.900986 containerd[1453]: time="2026-04-14T01:11:04.900731398Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 14 01:11:04.901454 kubelet[2497]: E0414 01:11:04.901324 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:04.956669 containerd[1453]: time="2026-04-14T01:11:04.956405914Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\""
Apr 14 01:11:04.963210 containerd[1453]: time="2026-04-14T01:11:04.958453549Z" level=info msg="StartContainer for \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\""
Apr 14 01:11:05.037750 systemd[1]: run-containerd-runc-k8s.io-699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a-runc.Lbs47c.mount: Deactivated successfully.
Apr 14 01:11:05.045515 systemd[1]: Started cri-containerd-699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a.scope - libcontainer container 699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a.
Apr 14 01:11:05.094802 containerd[1453]: time="2026-04-14T01:11:05.094759081Z" level=info msg="StartContainer for \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\" returns successfully"
Apr 14 01:11:05.106807 systemd[1]: cri-containerd-699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a.scope: Deactivated successfully.
Apr 14 01:11:05.195149 containerd[1453]: time="2026-04-14T01:11:05.194976628Z" level=info msg="shim disconnected" id=699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a namespace=k8s.io
Apr 14 01:11:05.195149 containerd[1453]: time="2026-04-14T01:11:05.195051529Z" level=warning msg="cleaning up after shim disconnected" id=699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a namespace=k8s.io
Apr 14 01:11:05.195149 containerd[1453]: time="2026-04-14T01:11:05.195059180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:11:05.888753 kubelet[2497]: E0414 01:11:05.888671 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:05.889419 kubelet[2497]: E0414 01:11:05.888684 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:05.894316 containerd[1453]: time="2026-04-14T01:11:05.894272642Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 14 01:11:05.913914 kubelet[2497]: I0414 01:11:05.913809 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-6262q" podStartSLOduration=2.187189521 podStartE2EDuration="9.913776968s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="2026-04-14 01:10:56.801427231 +0000 UTC m=+7.122160423" lastFinishedPulling="2026-04-14 01:11:04.528014675 +0000 UTC m=+14.848747870" observedRunningTime="2026-04-14 01:11:04.955554288 +0000 UTC m=+15.276287478" watchObservedRunningTime="2026-04-14 01:11:05.913776968 +0000 UTC m=+16.234510162"
Apr 14 01:11:05.915352 containerd[1453]: time="2026-04-14T01:11:05.915236324Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\""
Apr 14 01:11:05.915909 containerd[1453]: time="2026-04-14T01:11:05.915868452Z" level=info msg="StartContainer for \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\""
Apr 14 01:11:05.951560 systemd[1]: Started cri-containerd-2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3.scope - libcontainer container 2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3.
Apr 14 01:11:05.962074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a-rootfs.mount: Deactivated successfully.
Apr 14 01:11:05.977649 systemd[1]: cri-containerd-2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3.scope: Deactivated successfully.
Apr 14 01:11:05.980233 containerd[1453]: time="2026-04-14T01:11:05.980143428Z" level=info msg="StartContainer for \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\" returns successfully"
Apr 14 01:11:06.004934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3-rootfs.mount: Deactivated successfully.
Apr 14 01:11:06.011004 containerd[1453]: time="2026-04-14T01:11:06.010905745Z" level=info msg="shim disconnected" id=2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3 namespace=k8s.io
Apr 14 01:11:06.011260 containerd[1453]: time="2026-04-14T01:11:06.011028554Z" level=warning msg="cleaning up after shim disconnected" id=2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3 namespace=k8s.io
Apr 14 01:11:06.011260 containerd[1453]: time="2026-04-14T01:11:06.011038936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:11:06.893790 kubelet[2497]: E0414 01:11:06.893711 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:06.901161 containerd[1453]: time="2026-04-14T01:11:06.901108741Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 14 01:11:06.919077 containerd[1453]: time="2026-04-14T01:11:06.919026573Z" level=info msg="CreateContainer within sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\""
Apr 14 01:11:06.919695 containerd[1453]: time="2026-04-14T01:11:06.919668566Z" level=info msg="StartContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\""
Apr 14 01:11:06.952806 systemd[1]: Started cri-containerd-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df.scope - libcontainer container 4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df.
Apr 14 01:11:06.992218 containerd[1453]: time="2026-04-14T01:11:06.992079670Z" level=info msg="StartContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" returns successfully"
Apr 14 01:11:07.053417 systemd[1]: run-containerd-runc-k8s.io-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df-runc.H0VuUu.mount: Deactivated successfully.
Apr 14 01:11:07.200970 kubelet[2497]: I0414 01:11:07.200315 2497 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 14 01:11:07.249056 systemd[1]: Created slice kubepods-burstable-pod5a4f3da3_ffc8_4019_a125_69ea3d7d4240.slice - libcontainer container kubepods-burstable-pod5a4f3da3_ffc8_4019_a125_69ea3d7d4240.slice.
Apr 14 01:11:07.254824 systemd[1]: Created slice kubepods-burstable-pod233d9fc5_c08c_4def_8e2d_c3a25b45e889.slice - libcontainer container kubepods-burstable-pod233d9fc5_c08c_4def_8e2d_c3a25b45e889.slice.
Apr 14 01:11:07.343738 kubelet[2497]: I0414 01:11:07.343641 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4r4\" (UniqueName: \"kubernetes.io/projected/5a4f3da3-ffc8-4019-a125-69ea3d7d4240-kube-api-access-9v4r4\") pod \"coredns-66bc5c9577-2lh4k\" (UID: \"5a4f3da3-ffc8-4019-a125-69ea3d7d4240\") " pod="kube-system/coredns-66bc5c9577-2lh4k"
Apr 14 01:11:07.343738 kubelet[2497]: I0414 01:11:07.343709 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/233d9fc5-c08c-4def-8e2d-c3a25b45e889-config-volume\") pod \"coredns-66bc5c9577-8thjv\" (UID: \"233d9fc5-c08c-4def-8e2d-c3a25b45e889\") " pod="kube-system/coredns-66bc5c9577-8thjv"
Apr 14 01:11:07.343738 kubelet[2497]: I0414 01:11:07.343744 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4f3da3-ffc8-4019-a125-69ea3d7d4240-config-volume\") pod \"coredns-66bc5c9577-2lh4k\" (UID: \"5a4f3da3-ffc8-4019-a125-69ea3d7d4240\") " pod="kube-system/coredns-66bc5c9577-2lh4k"
Apr 14 01:11:07.343958 kubelet[2497]: I0414 01:11:07.343759 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59q7j\" (UniqueName: \"kubernetes.io/projected/233d9fc5-c08c-4def-8e2d-c3a25b45e889-kube-api-access-59q7j\") pod \"coredns-66bc5c9577-8thjv\" (UID: \"233d9fc5-c08c-4def-8e2d-c3a25b45e889\") " pod="kube-system/coredns-66bc5c9577-8thjv"
Apr 14 01:11:07.556610 kubelet[2497]: E0414 01:11:07.556463 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:07.563028 kubelet[2497]: E0414 01:11:07.562986 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:07.568560 containerd[1453]: time="2026-04-14T01:11:07.568472324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8thjv,Uid:233d9fc5-c08c-4def-8e2d-c3a25b45e889,Namespace:kube-system,Attempt:0,}"
Apr 14 01:11:07.569487 containerd[1453]: time="2026-04-14T01:11:07.569076585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2lh4k,Uid:5a4f3da3-ffc8-4019-a125-69ea3d7d4240,Namespace:kube-system,Attempt:0,}"
Apr 14 01:11:07.901914 kubelet[2497]: E0414 01:11:07.901831 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:08.543733 systemd-networkd[1380]: cilium_host: Link UP
Apr 14 01:11:08.543817 systemd-networkd[1380]: cilium_net: Link UP
Apr 14 01:11:08.543905 systemd-networkd[1380]: cilium_net: Gained carrier
Apr 14 01:11:08.543988 systemd-networkd[1380]: cilium_host: Gained carrier
Apr 14 01:11:08.641603 systemd-networkd[1380]: cilium_net: Gained IPv6LL
Apr 14 01:11:08.656292 systemd-networkd[1380]: cilium_vxlan: Link UP
Apr 14 01:11:08.656299 systemd-networkd[1380]: cilium_vxlan: Gained carrier
Apr 14 01:11:08.729784 systemd-networkd[1380]: cilium_host: Gained IPv6LL
Apr 14 01:11:08.895211 kernel: NET: Registered PF_ALG protocol family
Apr 14 01:11:08.903879 kubelet[2497]: E0414 01:11:08.903401 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:09.515826 systemd-networkd[1380]: lxc_health: Link UP
Apr 14 01:11:09.523309 systemd-networkd[1380]: lxc_health: Gained carrier
Apr 14 01:11:09.644637 systemd-networkd[1380]: lxc2926d32c9623: Link UP
Apr 14 01:11:09.652295 kernel: eth0: renamed from tmpbd300
Apr 14 01:11:09.658935 systemd-networkd[1380]: lxc2926d32c9623: Gained carrier
Apr 14 01:11:09.663697 systemd-networkd[1380]: lxc61d85b97c83d: Link UP
Apr 14 01:11:09.674225 kernel: eth0: renamed from tmpd6447
Apr 14 01:11:09.678493 systemd-networkd[1380]: lxc61d85b97c83d: Gained carrier
Apr 14 01:11:09.719467 update_engine[1441]: I20260414 01:11:09.719261 1441 update_attempter.cc:509] Updating boot flags...
Apr 14 01:11:09.729384 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL
Apr 14 01:11:09.749376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362)
Apr 14 01:11:09.796241 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362)
Apr 14 01:11:09.905481 kubelet[2497]: E0414 01:11:09.905443 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:10.570643 kubelet[2497]: I0414 01:11:10.570529 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2qdj" podStartSLOduration=8.243213651 podStartE2EDuration="14.570513861s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="2026-04-14 01:10:56.617782991 +0000 UTC m=+6.938516180" lastFinishedPulling="2026-04-14 01:11:02.9450832 +0000 UTC m=+13.265816390" observedRunningTime="2026-04-14 01:11:07.923709464 +0000 UTC m=+18.244442664" watchObservedRunningTime="2026-04-14 01:11:10.570513861 +0000 UTC m=+20.891247061"
Apr 14 01:11:10.909023 kubelet[2497]: E0414 01:11:10.908961 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:11.073589 systemd-networkd[1380]: lxc2926d32c9623: Gained IPv6LL
Apr 14 01:11:11.074152 systemd-networkd[1380]: lxc61d85b97c83d: Gained IPv6LL
Apr 14 01:11:11.137888 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Apr 14 01:11:11.910477 kubelet[2497]: E0414 01:11:11.910402 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:12.912725 kubelet[2497]: E0414 01:11:12.912669 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:13.496385 containerd[1453]: time="2026-04-14T01:11:13.494329845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:13.496385 containerd[1453]: time="2026-04-14T01:11:13.494890487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:13.496385 containerd[1453]: time="2026-04-14T01:11:13.494901290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:13.496385 containerd[1453]: time="2026-04-14T01:11:13.494981584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:13.516504 systemd[1]: Started cri-containerd-bd3009cf1347593ca3a2a5449d934d4044b07efd114c8c0c13e929927cc17151.scope - libcontainer container bd3009cf1347593ca3a2a5449d934d4044b07efd114c8c0c13e929927cc17151.
Apr 14 01:11:13.518979 containerd[1453]: time="2026-04-14T01:11:13.518650956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:13.518979 containerd[1453]: time="2026-04-14T01:11:13.518692567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:13.518979 containerd[1453]: time="2026-04-14T01:11:13.518714350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:13.518979 containerd[1453]: time="2026-04-14T01:11:13.518790598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:13.527695 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 01:11:13.544545 systemd[1]: Started cri-containerd-d64473b562bb335582e3ceb6f58aca27235b1483e9b9c247499e4d3cdfff09a9.scope - libcontainer container d64473b562bb335582e3ceb6f58aca27235b1483e9b9c247499e4d3cdfff09a9.
Apr 14 01:11:13.558031 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 01:11:13.558691 containerd[1453]: time="2026-04-14T01:11:13.558626721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2lh4k,Uid:5a4f3da3-ffc8-4019-a125-69ea3d7d4240,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd3009cf1347593ca3a2a5449d934d4044b07efd114c8c0c13e929927cc17151\""
Apr 14 01:11:13.561633 kubelet[2497]: E0414 01:11:13.561601 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:13.567128 containerd[1453]: time="2026-04-14T01:11:13.567088920Z" level=info msg="CreateContainer within sandbox \"bd3009cf1347593ca3a2a5449d934d4044b07efd114c8c0c13e929927cc17151\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 01:11:13.582042 containerd[1453]: time="2026-04-14T01:11:13.582007415Z" level=info msg="CreateContainer within sandbox \"bd3009cf1347593ca3a2a5449d934d4044b07efd114c8c0c13e929927cc17151\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c63cf104d684a4c68aca2ecb7e5e03a2f172a6bb69b5deb7f2cdd3d51e69ac2e\""
Apr 14 01:11:13.583499 containerd[1453]: time="2026-04-14T01:11:13.583468459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8thjv,Uid:233d9fc5-c08c-4def-8e2d-c3a25b45e889,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64473b562bb335582e3ceb6f58aca27235b1483e9b9c247499e4d3cdfff09a9\""
Apr 14 01:11:13.584775 containerd[1453]: time="2026-04-14T01:11:13.584732801Z" level=info msg="StartContainer for \"c63cf104d684a4c68aca2ecb7e5e03a2f172a6bb69b5deb7f2cdd3d51e69ac2e\""
Apr 14 01:11:13.584826 kubelet[2497]: E0414 01:11:13.584803 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:13.590976 containerd[1453]: time="2026-04-14T01:11:13.590943682Z" level=info msg="CreateContainer within sandbox \"d64473b562bb335582e3ceb6f58aca27235b1483e9b9c247499e4d3cdfff09a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 01:11:13.612048 containerd[1453]: time="2026-04-14T01:11:13.611927508Z" level=info msg="CreateContainer within sandbox \"d64473b562bb335582e3ceb6f58aca27235b1483e9b9c247499e4d3cdfff09a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"501ba64e1b0694c27d8f3655e24d7e8a224800b6d1be04057ed72e0b5f274dd8\""
Apr 14 01:11:13.614079 containerd[1453]: time="2026-04-14T01:11:13.614031174Z" level=info msg="StartContainer for \"501ba64e1b0694c27d8f3655e24d7e8a224800b6d1be04057ed72e0b5f274dd8\""
Apr 14 01:11:13.618731 systemd[1]: Started cri-containerd-c63cf104d684a4c68aca2ecb7e5e03a2f172a6bb69b5deb7f2cdd3d51e69ac2e.scope - libcontainer container c63cf104d684a4c68aca2ecb7e5e03a2f172a6bb69b5deb7f2cdd3d51e69ac2e.
Apr 14 01:11:13.640381 systemd[1]: Started cri-containerd-501ba64e1b0694c27d8f3655e24d7e8a224800b6d1be04057ed72e0b5f274dd8.scope - libcontainer container 501ba64e1b0694c27d8f3655e24d7e8a224800b6d1be04057ed72e0b5f274dd8.
Apr 14 01:11:13.658117 containerd[1453]: time="2026-04-14T01:11:13.657529280Z" level=info msg="StartContainer for \"c63cf104d684a4c68aca2ecb7e5e03a2f172a6bb69b5deb7f2cdd3d51e69ac2e\" returns successfully"
Apr 14 01:11:13.673955 containerd[1453]: time="2026-04-14T01:11:13.673793925Z" level=info msg="StartContainer for \"501ba64e1b0694c27d8f3655e24d7e8a224800b6d1be04057ed72e0b5f274dd8\" returns successfully"
Apr 14 01:11:13.920427 kubelet[2497]: E0414 01:11:13.920365 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:13.923148 kubelet[2497]: E0414 01:11:13.923080 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:13.940221 kubelet[2497]: I0414 01:11:13.937311 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8thjv" podStartSLOduration=17.937291144 podStartE2EDuration="17.937291144s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:11:13.937230029 +0000 UTC m=+24.257963224" watchObservedRunningTime="2026-04-14 01:11:13.937291144 +0000 UTC m=+24.258024335"
Apr 14 01:11:13.977389 kubelet[2497]: I0414 01:11:13.977328 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2lh4k" podStartSLOduration=17.977309348 podStartE2EDuration="17.977309348s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:11:13.959453914 +0000 UTC m=+24.280187115" watchObservedRunningTime="2026-04-14 01:11:13.977309348 +0000 UTC m=+24.298042549"
Apr 14 01:11:14.091900 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon (10.0.0.1:43766).
Apr 14 01:11:14.148893 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:14.152041 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:14.161321 systemd-logind[1434]: New session 8 of user core.
Apr 14 01:11:14.171562 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 14 01:11:14.304865 sshd[3899]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:14.308585 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:43766.service: Deactivated successfully.
Apr 14 01:11:14.310672 systemd[1]: session-8.scope: Deactivated successfully.
Apr 14 01:11:14.311575 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit.
Apr 14 01:11:14.312622 systemd-logind[1434]: Removed session 8.
Apr 14 01:11:14.925157 kubelet[2497]: E0414 01:11:14.925111 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:14.925593 kubelet[2497]: E0414 01:11:14.925214 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:15.931286 kubelet[2497]: E0414 01:11:15.930872 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:15.933370 kubelet[2497]: E0414 01:11:15.933283 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:19.325710 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314).
Apr 14 01:11:19.369700 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:19.371504 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:19.377660 systemd-logind[1434]: New session 9 of user core.
Apr 14 01:11:19.386662 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 14 01:11:19.527276 sshd[3920]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:19.531110 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:50314.service: Deactivated successfully.
Apr 14 01:11:19.533985 systemd[1]: session-9.scope: Deactivated successfully.
Apr 14 01:11:19.534748 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit.
Apr 14 01:11:19.535961 systemd-logind[1434]: Removed session 9.
Apr 14 01:11:24.540572 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:50318.service - OpenSSH per-connection server daemon (10.0.0.1:50318).
Apr 14 01:11:24.593665 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 50318 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:24.595258 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:24.600526 systemd-logind[1434]: New session 10 of user core.
Apr 14 01:11:24.618640 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 01:11:24.747151 sshd[3936]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:24.752503 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:50318.service: Deactivated successfully.
Apr 14 01:11:24.754669 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 01:11:24.755416 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit.
Apr 14 01:11:24.759829 systemd-logind[1434]: Removed session 10.
Apr 14 01:11:29.778121 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:51194.service - OpenSSH per-connection server daemon (10.0.0.1:51194).
Apr 14 01:11:29.822255 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 51194 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:29.824503 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:29.830516 systemd-logind[1434]: New session 11 of user core.
Apr 14 01:11:29.838877 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 01:11:29.965707 sshd[3953]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:29.976024 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:51194.service: Deactivated successfully.
Apr 14 01:11:29.977562 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 01:11:29.978626 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit.
Apr 14 01:11:29.988778 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:51196.service - OpenSSH per-connection server daemon (10.0.0.1:51196).
Apr 14 01:11:29.989630 systemd-logind[1434]: Removed session 11.
Apr 14 01:11:30.023776 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:30.026127 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:30.033868 systemd-logind[1434]: New session 12 of user core.
Apr 14 01:11:30.052542 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 01:11:30.260433 sshd[3969]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:30.273086 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:51196.service: Deactivated successfully.
Apr 14 01:11:30.285803 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 01:11:30.291601 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit.
Apr 14 01:11:30.307083 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:51208.service - OpenSSH per-connection server daemon (10.0.0.1:51208).
Apr 14 01:11:30.309008 systemd-logind[1434]: Removed session 12.
Apr 14 01:11:30.346943 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 51208 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:30.349301 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:30.358981 systemd-logind[1434]: New session 13 of user core.
Apr 14 01:11:30.372765 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 01:11:30.493116 sshd[3981]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:30.495919 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:51208.service: Deactivated successfully.
Apr 14 01:11:30.498564 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 01:11:30.500527 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit.
Apr 14 01:11:30.501663 systemd-logind[1434]: Removed session 13.
Apr 14 01:11:35.512818 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508).
Apr 14 01:11:35.560807 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:35.562536 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:35.568473 systemd-logind[1434]: New session 14 of user core.
Apr 14 01:11:35.575832 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 01:11:35.699318 sshd[3997]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:35.706005 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:59508.service: Deactivated successfully.
Apr 14 01:11:35.707704 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 01:11:35.708525 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit.
Apr 14 01:11:35.709819 systemd-logind[1434]: Removed session 14.
Apr 14 01:11:40.708027 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:59510.service - OpenSSH per-connection server daemon (10.0.0.1:59510).
Apr 14 01:11:40.746722 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 59510 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:40.748143 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:40.753444 systemd-logind[1434]: New session 15 of user core.
Apr 14 01:11:40.769496 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 01:11:40.894326 sshd[4011]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:40.909091 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:59510.service: Deactivated successfully.
Apr 14 01:11:40.911271 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 01:11:40.912642 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit.
Apr 14 01:11:40.919833 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:59522.service - OpenSSH per-connection server daemon (10.0.0.1:59522).
Apr 14 01:11:40.920632 systemd-logind[1434]: Removed session 15.
Apr 14 01:11:40.956041 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 59522 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:40.958881 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:40.963147 systemd-logind[1434]: New session 16 of user core.
Apr 14 01:11:40.967417 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 01:11:41.209465 sshd[4025]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:41.219386 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:59530.service - OpenSSH per-connection server daemon (10.0.0.1:59530).
Apr 14 01:11:41.222751 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit.
Apr 14 01:11:41.225386 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:59522.service: Deactivated successfully.
Apr 14 01:11:41.243471 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 01:11:41.246041 systemd-logind[1434]: Removed session 16.
Apr 14 01:11:41.316570 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 59530 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:41.321150 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:41.334389 systemd-logind[1434]: New session 17 of user core.
Apr 14 01:11:41.340052 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 01:11:41.937002 sshd[4035]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:41.943826 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:59530.service: Deactivated successfully.
Apr 14 01:11:41.947127 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 01:11:41.948465 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit.
Apr 14 01:11:41.958603 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:59540.service - OpenSSH per-connection server daemon (10.0.0.1:59540).
Apr 14 01:11:41.960429 systemd-logind[1434]: Removed session 17.
Apr 14 01:11:42.001020 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 59540 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:42.005668 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:42.014252 systemd-logind[1434]: New session 18 of user core.
Apr 14 01:11:42.035766 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 01:11:42.340891 sshd[4055]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:42.351685 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:59540.service: Deactivated successfully.
Apr 14 01:11:42.354689 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 01:11:42.356405 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit.
Apr 14 01:11:42.366642 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:59542.service - OpenSSH per-connection server daemon (10.0.0.1:59542).
Apr 14 01:11:42.367894 systemd-logind[1434]: Removed session 18.
Apr 14 01:11:42.401073 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 59542 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:42.402584 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:42.407026 systemd-logind[1434]: New session 19 of user core.
Apr 14 01:11:42.423755 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 01:11:42.572555 sshd[4068]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:42.576892 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:59542.service: Deactivated successfully.
Apr 14 01:11:42.579665 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 01:11:42.580656 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit.
Apr 14 01:11:42.582836 systemd-logind[1434]: Removed session 19.
Apr 14 01:11:47.261356 kernel: hrtimer: interrupt took 6357330 ns
Apr 14 01:11:47.595442 systemd[1]: Started sshd@19-10.0.0.9:22-10.0.0.1:56376.service - OpenSSH per-connection server daemon (10.0.0.1:56376).
Apr 14 01:11:47.662263 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 56376 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:47.670635 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:47.679062 systemd-logind[1434]: New session 20 of user core.
Apr 14 01:11:47.688383 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 01:11:47.852628 sshd[4088]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:47.857541 systemd[1]: sshd@19-10.0.0.9:22-10.0.0.1:56376.service: Deactivated successfully.
Apr 14 01:11:47.861233 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 01:11:47.862090 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit.
Apr 14 01:11:47.863118 systemd-logind[1434]: Removed session 20.
Apr 14 01:11:52.876106 systemd[1]: Started sshd@20-10.0.0.9:22-10.0.0.1:56388.service - OpenSSH per-connection server daemon (10.0.0.1:56388).
Apr 14 01:11:52.914664 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 56388 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:52.916984 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:52.926618 systemd-logind[1434]: New session 21 of user core.
Apr 14 01:11:52.932289 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 01:11:53.065702 sshd[4104]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:53.076117 systemd[1]: sshd@20-10.0.0.9:22-10.0.0.1:56388.service: Deactivated successfully. Apr 14 01:11:53.078023 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 01:11:53.079770 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit. Apr 14 01:11:53.086647 systemd[1]: Started sshd@21-10.0.0.9:22-10.0.0.1:56398.service - OpenSSH per-connection server daemon (10.0.0.1:56398). Apr 14 01:11:53.088114 systemd-logind[1434]: Removed session 21. Apr 14 01:11:53.128886 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 56398 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:53.130429 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:53.137327 systemd-logind[1434]: New session 22 of user core. Apr 14 01:11:53.151436 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 01:11:54.537109 containerd[1453]: time="2026-04-14T01:11:54.536958743Z" level=info msg="StopContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" with timeout 30 (s)" Apr 14 01:11:54.537643 containerd[1453]: time="2026-04-14T01:11:54.537494418Z" level=info msg="Stop container \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" with signal terminated" Apr 14 01:11:54.564810 systemd[1]: run-containerd-runc-k8s.io-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df-runc.fNjeVk.mount: Deactivated successfully. Apr 14 01:11:54.565863 systemd[1]: cri-containerd-37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f.scope: Deactivated successfully. 
Apr 14 01:11:54.594067 containerd[1453]: time="2026-04-14T01:11:54.593944735Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 01:11:54.596550 containerd[1453]: time="2026-04-14T01:11:54.596516920Z" level=info msg="StopContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" with timeout 2 (s)" Apr 14 01:11:54.596769 containerd[1453]: time="2026-04-14T01:11:54.596736436Z" level=info msg="Stop container \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" with signal terminated" Apr 14 01:11:54.598419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f-rootfs.mount: Deactivated successfully. Apr 14 01:11:54.612883 systemd-networkd[1380]: lxc_health: Link DOWN Apr 14 01:11:54.612894 systemd-networkd[1380]: lxc_health: Lost carrier Apr 14 01:11:54.616051 containerd[1453]: time="2026-04-14T01:11:54.615986173Z" level=info msg="shim disconnected" id=37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f namespace=k8s.io Apr 14 01:11:54.616051 containerd[1453]: time="2026-04-14T01:11:54.616055098Z" level=warning msg="cleaning up after shim disconnected" id=37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f namespace=k8s.io Apr 14 01:11:54.616292 containerd[1453]: time="2026-04-14T01:11:54.616062595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:54.632585 systemd[1]: cri-containerd-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df.scope: Deactivated successfully. Apr 14 01:11:54.633247 systemd[1]: cri-containerd-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df.scope: Consumed 6.738s CPU time. 
Apr 14 01:11:54.639775 containerd[1453]: time="2026-04-14T01:11:54.639718150Z" level=info msg="StopContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" returns successfully" Apr 14 01:11:54.641463 containerd[1453]: time="2026-04-14T01:11:54.641411827Z" level=info msg="StopPodSandbox for \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\"" Apr 14 01:11:54.641609 containerd[1453]: time="2026-04-14T01:11:54.641486848Z" level=info msg="Container to stop \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.644767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb-shm.mount: Deactivated successfully. Apr 14 01:11:54.653914 systemd[1]: cri-containerd-5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb.scope: Deactivated successfully. Apr 14 01:11:54.658682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df-rootfs.mount: Deactivated successfully. 
Apr 14 01:11:54.665688 containerd[1453]: time="2026-04-14T01:11:54.665605203Z" level=info msg="shim disconnected" id=4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df namespace=k8s.io Apr 14 01:11:54.665688 containerd[1453]: time="2026-04-14T01:11:54.665678760Z" level=warning msg="cleaning up after shim disconnected" id=4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df namespace=k8s.io Apr 14 01:11:54.665875 containerd[1453]: time="2026-04-14T01:11:54.665708443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:54.683756 containerd[1453]: time="2026-04-14T01:11:54.683707544Z" level=info msg="StopContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" returns successfully" Apr 14 01:11:54.684408 containerd[1453]: time="2026-04-14T01:11:54.684376615Z" level=info msg="StopPodSandbox for \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\"" Apr 14 01:11:54.684408 containerd[1453]: time="2026-04-14T01:11:54.684416568Z" level=info msg="Container to stop \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.684408 containerd[1453]: time="2026-04-14T01:11:54.684426705Z" level=info msg="Container to stop \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.684543 containerd[1453]: time="2026-04-14T01:11:54.684433145Z" level=info msg="Container to stop \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.684543 containerd[1453]: time="2026-04-14T01:11:54.684440400Z" level=info msg="Container to stop \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.684543 
containerd[1453]: time="2026-04-14T01:11:54.684448428Z" level=info msg="Container to stop \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 01:11:54.684794 containerd[1453]: time="2026-04-14T01:11:54.684763117Z" level=info msg="shim disconnected" id=5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb namespace=k8s.io Apr 14 01:11:54.684794 containerd[1453]: time="2026-04-14T01:11:54.684792046Z" level=warning msg="cleaning up after shim disconnected" id=5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb namespace=k8s.io Apr 14 01:11:54.684882 containerd[1453]: time="2026-04-14T01:11:54.684797998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:54.691704 systemd[1]: cri-containerd-3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185.scope: Deactivated successfully. Apr 14 01:11:54.699453 containerd[1453]: time="2026-04-14T01:11:54.699376568Z" level=info msg="TearDown network for sandbox \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\" successfully" Apr 14 01:11:54.699453 containerd[1453]: time="2026-04-14T01:11:54.699420065Z" level=info msg="StopPodSandbox for \"5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb\" returns successfully" Apr 14 01:11:54.722123 containerd[1453]: time="2026-04-14T01:11:54.722057211Z" level=info msg="shim disconnected" id=3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185 namespace=k8s.io Apr 14 01:11:54.722390 containerd[1453]: time="2026-04-14T01:11:54.722194786Z" level=warning msg="cleaning up after shim disconnected" id=3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185 namespace=k8s.io Apr 14 01:11:54.722390 containerd[1453]: time="2026-04-14T01:11:54.722222700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:54.742153 containerd[1453]: time="2026-04-14T01:11:54.742098584Z" level=info 
msg="TearDown network for sandbox \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" successfully" Apr 14 01:11:54.742153 containerd[1453]: time="2026-04-14T01:11:54.742132802Z" level=info msg="StopPodSandbox for \"3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185\" returns successfully" Apr 14 01:11:54.782102 kubelet[2497]: I0414 01:11:54.782009 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5759a036-e80f-4c0b-b00a-328cc881450c-clustermesh-secrets\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782102 kubelet[2497]: I0414 01:11:54.782099 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-xtables-lock\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782102 kubelet[2497]: I0414 01:11:54.782120 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-config-path\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782134 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-kernel\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782150 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-bpf-maps\") pod 
\"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782221 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-cgroup\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782236 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-lib-modules\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782252 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzf5\" (UniqueName: \"kubernetes.io/projected/ee726647-6e20-4c62-be0b-e8e3a4442292-kube-api-access-lvzf5\") pod \"ee726647-6e20-4c62-be0b-e8e3a4442292\" (UID: \"ee726647-6e20-4c62-be0b-e8e3a4442292\") " Apr 14 01:11:54.782636 kubelet[2497]: I0414 01:11:54.782264 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-etc-cni-netd\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782280 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-net\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782296 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-run\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782308 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cni-path\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782321 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-hostproc\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782335 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xps77\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-kube-api-access-xps77\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782764 kubelet[2497]: I0414 01:11:54.782353 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee726647-6e20-4c62-be0b-e8e3a4442292-cilium-config-path\") pod \"ee726647-6e20-4c62-be0b-e8e3a4442292\" (UID: \"ee726647-6e20-4c62-be0b-e8e3a4442292\") " Apr 14 01:11:54.782881 kubelet[2497]: I0414 01:11:54.782372 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-hubble-tls\") pod \"5759a036-e80f-4c0b-b00a-328cc881450c\" (UID: \"5759a036-e80f-4c0b-b00a-328cc881450c\") " Apr 14 01:11:54.782881 kubelet[2497]: I0414 
01:11:54.782671 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783006 kubelet[2497]: I0414 01:11:54.782952 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cni-path" (OuterVolumeSpecName: "cni-path") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783006 kubelet[2497]: I0414 01:11:54.783012 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783006 kubelet[2497]: I0414 01:11:54.783026 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783006 kubelet[2497]: I0414 01:11:54.783038 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783223 kubelet[2497]: I0414 01:11:54.783052 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783223 kubelet[2497]: I0414 01:11:54.783065 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783223 kubelet[2497]: I0414 01:11:54.783076 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.783470 kubelet[2497]: I0414 01:11:54.783446 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.786442 kubelet[2497]: I0414 01:11:54.784841 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 01:11:54.786442 kubelet[2497]: I0414 01:11:54.785898 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-hostproc" (OuterVolumeSpecName: "hostproc") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 01:11:54.786442 kubelet[2497]: I0414 01:11:54.786416 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee726647-6e20-4c62-be0b-e8e3a4442292-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee726647-6e20-4c62-be0b-e8e3a4442292" (UID: "ee726647-6e20-4c62-be0b-e8e3a4442292"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 01:11:54.787061 kubelet[2497]: I0414 01:11:54.787017 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee726647-6e20-4c62-be0b-e8e3a4442292-kube-api-access-lvzf5" (OuterVolumeSpecName: "kube-api-access-lvzf5") pod "ee726647-6e20-4c62-be0b-e8e3a4442292" (UID: "ee726647-6e20-4c62-be0b-e8e3a4442292"). InnerVolumeSpecName "kube-api-access-lvzf5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 01:11:54.787061 kubelet[2497]: I0414 01:11:54.787071 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 01:11:54.787279 kubelet[2497]: I0414 01:11:54.787222 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-kube-api-access-xps77" (OuterVolumeSpecName: "kube-api-access-xps77") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "kube-api-access-xps77". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 01:11:54.787544 kubelet[2497]: I0414 01:11:54.787516 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5759a036-e80f-4c0b-b00a-328cc881450c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5759a036-e80f-4c0b-b00a-328cc881450c" (UID: "5759a036-e80f-4c0b-b00a-328cc881450c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 01:11:54.873852 kubelet[2497]: E0414 01:11:54.873799 2497 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.883941 2497 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.883999 2497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884010 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884017 2497 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884023 2497 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884030 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xps77\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-kube-api-access-xps77\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884039 2497 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee726647-6e20-4c62-be0b-e8e3a4442292-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884066 kubelet[2497]: I0414 01:11:54.884050 2497 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5759a036-e80f-4c0b-b00a-328cc881450c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884061 2497 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5759a036-e80f-4c0b-b00a-328cc881450c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884068 2497 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884080 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884087 2497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884093 2497 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884100 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884106 2497 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5759a036-e80f-4c0b-b00a-328cc881450c-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:54.884642 kubelet[2497]: I0414 01:11:54.884112 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvzf5\" (UniqueName: \"kubernetes.io/projected/ee726647-6e20-4c62-be0b-e8e3a4442292-kube-api-access-lvzf5\") on node \"localhost\" DevicePath \"\"" Apr 14 01:11:55.059301 kubelet[2497]: I0414 01:11:55.059103 2497 scope.go:117] "RemoveContainer" containerID="37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f" Apr 14 01:11:55.061365 containerd[1453]: time="2026-04-14T01:11:55.060875603Z" level=info msg="RemoveContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\"" Apr 14 01:11:55.063991 systemd[1]: Removed slice kubepods-besteffort-podee726647_6e20_4c62_be0b_e8e3a4442292.slice - libcontainer container kubepods-besteffort-podee726647_6e20_4c62_be0b_e8e3a4442292.slice. Apr 14 01:11:55.067667 containerd[1453]: time="2026-04-14T01:11:55.067625562Z" level=info msg="RemoveContainer for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" returns successfully" Apr 14 01:11:55.068360 kubelet[2497]: I0414 01:11:55.068341 2497 scope.go:117] "RemoveContainer" containerID="37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f" Apr 14 01:11:55.069935 systemd[1]: Removed slice kubepods-burstable-pod5759a036_e80f_4c0b_b00a_328cc881450c.slice - libcontainer container kubepods-burstable-pod5759a036_e80f_4c0b_b00a_328cc881450c.slice. Apr 14 01:11:55.070005 systemd[1]: kubepods-burstable-pod5759a036_e80f_4c0b_b00a_328cc881450c.slice: Consumed 6.825s CPU time. 
Apr 14 01:11:55.081334 containerd[1453]: time="2026-04-14T01:11:55.080900773Z" level=error msg="ContainerStatus for \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\": not found" Apr 14 01:11:55.092700 kubelet[2497]: E0414 01:11:55.092629 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\": not found" containerID="37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f" Apr 14 01:11:55.092700 kubelet[2497]: I0414 01:11:55.092685 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f"} err="failed to get container status \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\": rpc error: code = NotFound desc = an error occurred when try to find container \"37025d8289f3215bce528dd4445081e7c7856af09e0cae491ec37b17b2e4447f\": not found" Apr 14 01:11:55.092894 kubelet[2497]: I0414 01:11:55.092733 2497 scope.go:117] "RemoveContainer" containerID="4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df" Apr 14 01:11:55.095013 containerd[1453]: time="2026-04-14T01:11:55.094892776Z" level=info msg="RemoveContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\"" Apr 14 01:11:55.099402 containerd[1453]: time="2026-04-14T01:11:55.099330906Z" level=info msg="RemoveContainer for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" returns successfully" Apr 14 01:11:55.099750 kubelet[2497]: I0414 01:11:55.099600 2497 scope.go:117] "RemoveContainer" containerID="2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3" Apr 14 01:11:55.101114 
containerd[1453]: time="2026-04-14T01:11:55.101008591Z" level=info msg="RemoveContainer for \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\""
Apr 14 01:11:55.103627 containerd[1453]: time="2026-04-14T01:11:55.103604696Z" level=info msg="RemoveContainer for \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\" returns successfully"
Apr 14 01:11:55.103859 kubelet[2497]: I0414 01:11:55.103745 2497 scope.go:117] "RemoveContainer" containerID="699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a"
Apr 14 01:11:55.113054 containerd[1453]: time="2026-04-14T01:11:55.113012166Z" level=info msg="RemoveContainer for \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\""
Apr 14 01:11:55.115757 containerd[1453]: time="2026-04-14T01:11:55.115722148Z" level=info msg="RemoveContainer for \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\" returns successfully"
Apr 14 01:11:55.115989 kubelet[2497]: I0414 01:11:55.115960 2497 scope.go:117] "RemoveContainer" containerID="60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e"
Apr 14 01:11:55.117009 containerd[1453]: time="2026-04-14T01:11:55.116984578Z" level=info msg="RemoveContainer for \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\""
Apr 14 01:11:55.122572 containerd[1453]: time="2026-04-14T01:11:55.122533206Z" level=info msg="RemoveContainer for \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\" returns successfully"
Apr 14 01:11:55.122760 kubelet[2497]: I0414 01:11:55.122710 2497 scope.go:117] "RemoveContainer" containerID="3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055"
Apr 14 01:11:55.123700 containerd[1453]: time="2026-04-14T01:11:55.123650433Z" level=info msg="RemoveContainer for \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\""
Apr 14 01:11:55.127230 containerd[1453]: time="2026-04-14T01:11:55.127151514Z" level=info msg="RemoveContainer for \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\" returns successfully"
Apr 14 01:11:55.127543 kubelet[2497]: I0414 01:11:55.127511 2497 scope.go:117] "RemoveContainer" containerID="4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df"
Apr 14 01:11:55.127807 containerd[1453]: time="2026-04-14T01:11:55.127774686Z" level=error msg="ContainerStatus for \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\": not found"
Apr 14 01:11:55.128007 kubelet[2497]: E0414 01:11:55.127982 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\": not found" containerID="4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df"
Apr 14 01:11:55.128058 kubelet[2497]: I0414 01:11:55.128012 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df"} err="failed to get container status \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a5deeb0195da9f47f92749f6625600b9e7e68b7e11f22df93d5c3f3bfa662df\": not found"
Apr 14 01:11:55.128058 kubelet[2497]: I0414 01:11:55.128030 2497 scope.go:117] "RemoveContainer" containerID="2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3"
Apr 14 01:11:55.128290 containerd[1453]: time="2026-04-14T01:11:55.128222007Z" level=error msg="ContainerStatus for \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\": not found"
Apr 14 01:11:55.128421 kubelet[2497]: E0414 01:11:55.128395 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\": not found" containerID="2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3"
Apr 14 01:11:55.128461 kubelet[2497]: I0414 01:11:55.128423 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3"} err="failed to get container status \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c61e731dfb71f8f3871610d98f5f070f1b9269312bf82690aeda8501b7628d3\": not found"
Apr 14 01:11:55.128461 kubelet[2497]: I0414 01:11:55.128436 2497 scope.go:117] "RemoveContainer" containerID="699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a"
Apr 14 01:11:55.128648 containerd[1453]: time="2026-04-14T01:11:55.128615580Z" level=error msg="ContainerStatus for \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\": not found"
Apr 14 01:11:55.128744 kubelet[2497]: E0414 01:11:55.128724 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\": not found" containerID="699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a"
Apr 14 01:11:55.128774 kubelet[2497]: I0414 01:11:55.128745 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a"} err="failed to get container status \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\": rpc error: code = NotFound desc = an error occurred when try to find container \"699ffa91616c46591e3e02293f1da1c94ffce53b53f5e51f817b5c489092356a\": not found"
Apr 14 01:11:55.128774 kubelet[2497]: I0414 01:11:55.128754 2497 scope.go:117] "RemoveContainer" containerID="60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e"
Apr 14 01:11:55.128961 containerd[1453]: time="2026-04-14T01:11:55.128930429Z" level=error msg="ContainerStatus for \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\": not found"
Apr 14 01:11:55.129073 kubelet[2497]: E0414 01:11:55.129048 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\": not found" containerID="60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e"
Apr 14 01:11:55.129101 kubelet[2497]: I0414 01:11:55.129083 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e"} err="failed to get container status \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"60af8d22e54008b27e36745210bb80f9a53ea6d5da4007d6c209c9db81534b8e\": not found"
Apr 14 01:11:55.129120 kubelet[2497]: I0414 01:11:55.129101 2497 scope.go:117] "RemoveContainer" containerID="3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055"
Apr 14 01:11:55.129373 containerd[1453]: time="2026-04-14T01:11:55.129333539Z" level=error msg="ContainerStatus for \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\": not found"
Apr 14 01:11:55.129440 kubelet[2497]: E0414 01:11:55.129409 2497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\": not found" containerID="3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055"
Apr 14 01:11:55.129440 kubelet[2497]: I0414 01:11:55.129436 2497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055"} err="failed to get container status \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ca70590ccd493e780190c5bd6a8df3593465e7e3e31142e460b5b89f9ab4055\": not found"
Apr 14 01:11:55.560123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bee8ab0773d372f2e6c2134ac15908083df97890fd92b9f164417181c7539eb-rootfs.mount: Deactivated successfully.
Apr 14 01:11:55.560319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185-rootfs.mount: Deactivated successfully.
Apr 14 01:11:55.560380 systemd[1]: var-lib-kubelet-pods-ee726647\x2d6e20\x2d4c62\x2dbe0b\x2de8e3a4442292-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvzf5.mount: Deactivated successfully.
Apr 14 01:11:55.560434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3eab92b511f2afd5f3e26cb59343cbb7ce0a49ec7fae33cc6f1329aa8fcc7185-shm.mount: Deactivated successfully.
Apr 14 01:11:55.560492 systemd[1]: var-lib-kubelet-pods-5759a036\x2de80f\x2d4c0b\x2db00a\x2d328cc881450c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxps77.mount: Deactivated successfully.
Apr 14 01:11:55.560545 systemd[1]: var-lib-kubelet-pods-5759a036\x2de80f\x2d4c0b\x2db00a\x2d328cc881450c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 14 01:11:55.560598 systemd[1]: var-lib-kubelet-pods-5759a036\x2de80f\x2d4c0b\x2db00a\x2d328cc881450c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 14 01:11:55.825445 kubelet[2497]: I0414 01:11:55.825234 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5759a036-e80f-4c0b-b00a-328cc881450c" path="/var/lib/kubelet/pods/5759a036-e80f-4c0b-b00a-328cc881450c/volumes"
Apr 14 01:11:55.826834 kubelet[2497]: I0414 01:11:55.826234 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee726647-6e20-4c62-be0b-e8e3a4442292" path="/var/lib/kubelet/pods/ee726647-6e20-4c62-be0b-e8e3a4442292/volumes"
Apr 14 01:11:56.481537 sshd[4118]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:56.489665 systemd[1]: sshd@21-10.0.0.9:22-10.0.0.1:56398.service: Deactivated successfully.
Apr 14 01:11:56.491772 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 01:11:56.493760 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit.
Apr 14 01:11:56.499721 systemd[1]: Started sshd@22-10.0.0.9:22-10.0.0.1:42448.service - OpenSSH per-connection server daemon (10.0.0.1:42448).
Apr 14 01:11:56.502482 systemd-logind[1434]: Removed session 22.
Apr 14 01:11:56.550839 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 42448 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:56.552789 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:56.565441 systemd-logind[1434]: New session 23 of user core.
Apr 14 01:11:56.583331 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 01:11:57.699516 sshd[4281]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:57.712017 systemd[1]: sshd@22-10.0.0.9:22-10.0.0.1:42448.service: Deactivated successfully.
Apr 14 01:11:57.715130 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 01:11:57.720675 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit.
Apr 14 01:11:57.736816 systemd[1]: Started sshd@23-10.0.0.9:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454).
Apr 14 01:11:57.746743 systemd-logind[1434]: Removed session 23.
Apr 14 01:11:57.799893 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:57.805398 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:57.812505 systemd[1]: Created slice kubepods-burstable-pod938422a1_e67d_4aa8_8280_ba361cb732c6.slice - libcontainer container kubepods-burstable-pod938422a1_e67d_4aa8_8280_ba361cb732c6.slice.
Apr 14 01:11:57.824109 systemd-logind[1434]: New session 24 of user core.
Apr 14 01:11:57.836212 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929527 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-hostproc\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929591 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-cni-path\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929620 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938422a1-e67d-4aa8-8280-ba361cb732c6-cilium-config-path\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929640 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/938422a1-e67d-4aa8-8280-ba361cb732c6-cilium-ipsec-secrets\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929660 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-lib-modules\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.929750 kubelet[2497]: I0414 01:11:57.929684 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-bpf-maps\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930449 kubelet[2497]: I0414 01:11:57.929735 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938422a1-e67d-4aa8-8280-ba361cb732c6-clustermesh-secrets\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930449 kubelet[2497]: I0414 01:11:57.929804 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-host-proc-sys-kernel\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930449 kubelet[2497]: I0414 01:11:57.929828 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c755d\" (UniqueName: \"kubernetes.io/projected/938422a1-e67d-4aa8-8280-ba361cb732c6-kube-api-access-c755d\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930449 kubelet[2497]: I0414 01:11:57.929861 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-cilium-run\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930449 kubelet[2497]: I0414 01:11:57.929881 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-xtables-lock\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930620 kubelet[2497]: I0414 01:11:57.929907 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-host-proc-sys-net\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930620 kubelet[2497]: I0414 01:11:57.929925 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938422a1-e67d-4aa8-8280-ba361cb732c6-hubble-tls\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930620 kubelet[2497]: I0414 01:11:57.929974 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-cilium-cgroup\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.930620 kubelet[2497]: I0414 01:11:57.930001 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938422a1-e67d-4aa8-8280-ba361cb732c6-etc-cni-netd\") pod \"cilium-wd8r4\" (UID: \"938422a1-e67d-4aa8-8280-ba361cb732c6\") " pod="kube-system/cilium-wd8r4"
Apr 14 01:11:57.938249 sshd[4296]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:57.952761 systemd[1]: sshd@23-10.0.0.9:22-10.0.0.1:42454.service: Deactivated successfully.
Apr 14 01:11:57.956112 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 01:11:57.959278 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit.
Apr 14 01:11:57.974594 systemd[1]: Started sshd@24-10.0.0.9:22-10.0.0.1:42464.service - OpenSSH per-connection server daemon (10.0.0.1:42464).
Apr 14 01:11:57.977604 systemd-logind[1434]: Removed session 24.
Apr 14 01:11:58.056572 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 42464 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:58.058950 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:58.124374 systemd-logind[1434]: New session 25 of user core.
Apr 14 01:11:58.130685 containerd[1453]: time="2026-04-14T01:11:58.128244841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wd8r4,Uid:938422a1-e67d-4aa8-8280-ba361cb732c6,Namespace:kube-system,Attempt:0,}"
Apr 14 01:11:58.149447 kubelet[2497]: E0414 01:11:58.127030 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:58.149472 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 01:11:58.241308 containerd[1453]: time="2026-04-14T01:11:58.236867173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:58.241308 containerd[1453]: time="2026-04-14T01:11:58.238124677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:58.241308 containerd[1453]: time="2026-04-14T01:11:58.238143493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:58.241308 containerd[1453]: time="2026-04-14T01:11:58.238398799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:58.311673 systemd[1]: Started cri-containerd-70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940.scope - libcontainer container 70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940.
Apr 14 01:11:58.462147 containerd[1453]: time="2026-04-14T01:11:58.462090370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wd8r4,Uid:938422a1-e67d-4aa8-8280-ba361cb732c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\""
Apr 14 01:11:58.469700 kubelet[2497]: E0414 01:11:58.469670 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:58.510323 containerd[1453]: time="2026-04-14T01:11:58.509870292Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 14 01:11:58.649724 containerd[1453]: time="2026-04-14T01:11:58.645706978Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d\""
Apr 14 01:11:58.649724 containerd[1453]: time="2026-04-14T01:11:58.646921799Z" level=info msg="StartContainer for \"14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d\""
Apr 14 01:11:58.720034 systemd[1]: Started cri-containerd-14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d.scope - libcontainer container 14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d.
Apr 14 01:11:58.944921 containerd[1453]: time="2026-04-14T01:11:58.944827154Z" level=info msg="StartContainer for \"14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d\" returns successfully"
Apr 14 01:11:58.973360 systemd[1]: cri-containerd-14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d.scope: Deactivated successfully.
Apr 14 01:11:59.107939 containerd[1453]: time="2026-04-14T01:11:59.102144419Z" level=info msg="shim disconnected" id=14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d namespace=k8s.io
Apr 14 01:11:59.107939 containerd[1453]: time="2026-04-14T01:11:59.102284450Z" level=warning msg="cleaning up after shim disconnected" id=14576d2b37d6c5afd81b3ae5433482906594b9b02278ce76e2b406bffcd68c3d namespace=k8s.io
Apr 14 01:11:59.107939 containerd[1453]: time="2026-04-14T01:11:59.102301221Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:11:59.135601 kubelet[2497]: E0414 01:11:59.135509 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:59.875778 kubelet[2497]: E0414 01:11:59.875712 2497 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 01:12:00.165443 kubelet[2497]: E0414 01:12:00.162625 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:00.184229 containerd[1453]: time="2026-04-14T01:12:00.183919764Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 14 01:12:00.254964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313938316.mount: Deactivated successfully.
Apr 14 01:12:00.263102 containerd[1453]: time="2026-04-14T01:12:00.263002394Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10\""
Apr 14 01:12:00.267378 containerd[1453]: time="2026-04-14T01:12:00.265457938Z" level=info msg="StartContainer for \"db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10\""
Apr 14 01:12:00.404601 systemd[1]: Started cri-containerd-db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10.scope - libcontainer container db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10.
Apr 14 01:12:00.553644 containerd[1453]: time="2026-04-14T01:12:00.553422923Z" level=info msg="StartContainer for \"db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10\" returns successfully"
Apr 14 01:12:00.565117 systemd[1]: cri-containerd-db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10.scope: Deactivated successfully.
Apr 14 01:12:00.689563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10-rootfs.mount: Deactivated successfully.
Apr 14 01:12:00.756018 containerd[1453]: time="2026-04-14T01:12:00.755786699Z" level=info msg="shim disconnected" id=db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10 namespace=k8s.io
Apr 14 01:12:00.756346 containerd[1453]: time="2026-04-14T01:12:00.756081248Z" level=warning msg="cleaning up after shim disconnected" id=db4902b337d25896ee16585e93612b281c973a31ff3a6e9d872d1763823d8a10 namespace=k8s.io
Apr 14 01:12:00.756346 containerd[1453]: time="2026-04-14T01:12:00.756098799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:12:00.814622 containerd[1453]: time="2026-04-14T01:12:00.814402788Z" level=warning msg="cleanup warnings time=\"2026-04-14T01:12:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 01:12:01.153500 kubelet[2497]: I0414 01:12:01.151412 2497 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-14T01:12:01Z","lastTransitionTime":"2026-04-14T01:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 14 01:12:01.189442 kubelet[2497]: E0414 01:12:01.189351 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:01.241034 containerd[1453]: time="2026-04-14T01:12:01.240761932Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 14 01:12:01.318762 containerd[1453]: time="2026-04-14T01:12:01.317842487Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3\""
Apr 14 01:12:01.323616 containerd[1453]: time="2026-04-14T01:12:01.319451769Z" level=info msg="StartContainer for \"7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3\""
Apr 14 01:12:01.479470 systemd[1]: Started cri-containerd-7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3.scope - libcontainer container 7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3.
Apr 14 01:12:01.587888 containerd[1453]: time="2026-04-14T01:12:01.587733158Z" level=info msg="StartContainer for \"7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3\" returns successfully"
Apr 14 01:12:01.593271 systemd[1]: cri-containerd-7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3.scope: Deactivated successfully.
Apr 14 01:12:01.767801 containerd[1453]: time="2026-04-14T01:12:01.767384584Z" level=info msg="shim disconnected" id=7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3 namespace=k8s.io
Apr 14 01:12:01.767801 containerd[1453]: time="2026-04-14T01:12:01.767676674Z" level=warning msg="cleaning up after shim disconnected" id=7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3 namespace=k8s.io
Apr 14 01:12:01.767801 containerd[1453]: time="2026-04-14T01:12:01.767694296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:12:02.222215 kubelet[2497]: E0414 01:12:02.222043 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:02.266807 containerd[1453]: time="2026-04-14T01:12:02.266684512Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 14 01:12:02.299801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e781cfa1fcc5bae50315349ef333fb43e631f48602d5f37c3e1edc76f3b5ee3-rootfs.mount: Deactivated successfully.
Apr 14 01:12:02.338885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541453306.mount: Deactivated successfully.
Apr 14 01:12:02.357639 containerd[1453]: time="2026-04-14T01:12:02.357142456Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d\""
Apr 14 01:12:02.364419 containerd[1453]: time="2026-04-14T01:12:02.363128454Z" level=info msg="StartContainer for \"5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d\""
Apr 14 01:12:02.485871 systemd[1]: Started cri-containerd-5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d.scope - libcontainer container 5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d.
Apr 14 01:12:02.592398 systemd[1]: cri-containerd-5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d.scope: Deactivated successfully.
Apr 14 01:12:02.600494 containerd[1453]: time="2026-04-14T01:12:02.600290425Z" level=info msg="StartContainer for \"5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d\" returns successfully"
Apr 14 01:12:02.794376 containerd[1453]: time="2026-04-14T01:12:02.790137541Z" level=info msg="shim disconnected" id=5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d namespace=k8s.io
Apr 14 01:12:02.794376 containerd[1453]: time="2026-04-14T01:12:02.790870344Z" level=warning msg="cleaning up after shim disconnected" id=5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d namespace=k8s.io
Apr 14 01:12:02.794376 containerd[1453]: time="2026-04-14T01:12:02.790898585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 01:12:03.242630 kubelet[2497]: E0414 01:12:03.242553 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:03.259846 containerd[1453]: time="2026-04-14T01:12:03.259587413Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 14 01:12:03.299053 systemd[1]: run-containerd-runc-k8s.io-5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d-runc.RxnL5w.mount: Deactivated successfully.
Apr 14 01:12:03.300659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5055f7aff3b314fbf39facefb1f7b093e0d1d0bc6bb0cb5a120486a2562c150d-rootfs.mount: Deactivated successfully.
Apr 14 01:12:03.326157 containerd[1453]: time="2026-04-14T01:12:03.325974518Z" level=info msg="CreateContainer within sandbox \"70ff67ce08f6afd8544db2fa7d9669ed2e464dd50444f9d47a1fa7fc37c48940\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad\""
Apr 14 01:12:03.329259 containerd[1453]: time="2026-04-14T01:12:03.328439292Z" level=info msg="StartContainer for \"fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad\""
Apr 14 01:12:03.455122 systemd[1]: run-containerd-runc-k8s.io-fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad-runc.I9evQO.mount: Deactivated successfully.
Apr 14 01:12:03.502973 systemd[1]: Started cri-containerd-fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad.scope - libcontainer container fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad.
Apr 14 01:12:03.699491 containerd[1453]: time="2026-04-14T01:12:03.697897305Z" level=info msg="StartContainer for \"fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad\" returns successfully"
Apr 14 01:12:04.254710 kubelet[2497]: E0414 01:12:04.254644 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:04.316475 kubelet[2497]: I0414 01:12:04.316337 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wd8r4" podStartSLOduration=7.316312729 podStartE2EDuration="7.316312729s" podCreationTimestamp="2026-04-14 01:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:12:04.315993738 +0000 UTC m=+74.636726945" watchObservedRunningTime="2026-04-14 01:12:04.316312729 +0000 UTC m=+74.637045939"
Apr 14 01:12:04.421933 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 14 01:12:05.272356 systemd[1]: run-containerd-runc-k8s.io-fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad-runc.IekzoD.mount: Deactivated successfully.
Apr 14 01:12:06.126752 kubelet[2497]: E0414 01:12:06.126697 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:08.826850 kubelet[2497]: E0414 01:12:08.826239 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:11.823428 kubelet[2497]: E0414 01:12:11.822413 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:12.000351 systemd-networkd[1380]: lxc_health: Link UP
Apr 14 01:12:12.010355 systemd-networkd[1380]: lxc_health: Gained carrier
Apr 14 01:12:12.135331 kubelet[2497]: E0414 01:12:12.129160 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:12.312355 kubelet[2497]: E0414 01:12:12.311605 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:12:13.603419 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Apr 14 01:12:14.449631 systemd[1]: run-containerd-runc-k8s.io-fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad-runc.Siiq0W.mount: Deactivated successfully.
Apr 14 01:12:18.854256 systemd[1]: run-containerd-runc-k8s.io-fcc132ecf485a1248fcaf06982935632c8a44546fb6b058e08a5cad04429f9ad-runc.EqsyBC.mount: Deactivated successfully.
Apr 14 01:12:18.972055 sshd[4304]: pam_unix(sshd:session): session closed for user core
Apr 14 01:12:18.981992 systemd[1]: sshd@24-10.0.0.9:22-10.0.0.1:42464.service: Deactivated successfully.
Apr 14 01:12:18.987932 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 01:12:18.991139 systemd-logind[1434]: Session 25 logged out. Waiting for processes to exit.
Apr 14 01:12:18.993861 systemd-logind[1434]: Removed session 25.
Apr 14 01:12:19.831807 kubelet[2497]: E0414 01:12:19.831712 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"