Apr 14 00:50:02.169167 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 00:50:02.169195 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:50:02.169209 kernel: BIOS-provided physical RAM map:
Apr 14 00:50:02.169216 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 00:50:02.169222 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 00:50:02.169229 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 00:50:02.169237 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 00:50:02.169244 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 00:50:02.169250 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 00:50:02.169259 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 00:50:02.169265 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 00:50:02.169272 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 00:50:02.169279 kernel: NX (Execute Disable) protection: active
Apr 14 00:50:02.169286 kernel: APIC: Static calls initialized
Apr 14 00:50:02.169296 kernel: SMBIOS 2.8 present.
Apr 14 00:50:02.169305 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 00:50:02.169312 kernel: Hypervisor detected: KVM
Apr 14 00:50:02.169320 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 00:50:02.169328 kernel: kvm-clock: using sched offset of 5922239962 cycles
Apr 14 00:50:02.169336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 00:50:02.169344 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 00:50:02.169352 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 00:50:02.169361 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 00:50:02.169369 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 00:50:02.169380 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 00:50:02.169388 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 00:50:02.169619 kernel: Using GB pages for direct mapping
Apr 14 00:50:02.169630 kernel: ACPI: Early table checksum verification disabled
Apr 14 00:50:02.169638 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 00:50:02.169647 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169655 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169664 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169671 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 00:50:02.169681 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169689 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169697 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169704 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:50:02.169712 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 00:50:02.169720 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 00:50:02.169727 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 00:50:02.169738 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 00:50:02.169748 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 00:50:02.169757 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 00:50:02.169765 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 00:50:02.169772 kernel: No NUMA configuration found
Apr 14 00:50:02.169780 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 00:50:02.169789 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 00:50:02.169799 kernel: Zone ranges:
Apr 14 00:50:02.169808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 00:50:02.169817 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 00:50:02.169825 kernel: Normal empty
Apr 14 00:50:02.169833 kernel: Movable zone start for each node
Apr 14 00:50:02.169841 kernel: Early memory node ranges
Apr 14 00:50:02.169848 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 00:50:02.169895 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 00:50:02.169903 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 00:50:02.169911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 00:50:02.169922 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 00:50:02.169932 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 00:50:02.169941 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 00:50:02.169951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 00:50:02.169960 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 00:50:02.169969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 00:50:02.169978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 00:50:02.169987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 00:50:02.169996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 00:50:02.170007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 00:50:02.170016 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 00:50:02.170026 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 00:50:02.170034 kernel: TSC deadline timer available
Apr 14 00:50:02.170043 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 00:50:02.170051 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 00:50:02.170061 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 00:50:02.170070 kernel: kvm-guest: setup PV sched yield
Apr 14 00:50:02.170079 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 00:50:02.170091 kernel: Booting paravirtualized kernel on KVM
Apr 14 00:50:02.170101 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 00:50:02.170110 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 00:50:02.170119 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 00:50:02.170128 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 00:50:02.170137 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 00:50:02.170146 kernel: kvm-guest: PV spinlocks enabled
Apr 14 00:50:02.170156 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 00:50:02.170167 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:50:02.170178 kernel: random: crng init done
Apr 14 00:50:02.170187 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 00:50:02.170196 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 00:50:02.170205 kernel: Fallback order for Node 0: 0
Apr 14 00:50:02.170214 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 00:50:02.170223 kernel: Policy zone: DMA32
Apr 14 00:50:02.170232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 00:50:02.170242 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 14 00:50:02.170254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 00:50:02.170264 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 00:50:02.170273 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 00:50:02.170283 kernel: Dynamic Preempt: voluntary
Apr 14 00:50:02.170291 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 00:50:02.170301 kernel: rcu: RCU event tracing is enabled.
Apr 14 00:50:02.170310 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 00:50:02.170320 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 00:50:02.170329 kernel: Rude variant of Tasks RCU enabled.
Apr 14 00:50:02.170341 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 00:50:02.170350 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 00:50:02.170359 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 00:50:02.170369 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 00:50:02.170378 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 00:50:02.170387 kernel: Console: colour VGA+ 80x25
Apr 14 00:50:02.170467 kernel: printk: console [ttyS0] enabled
Apr 14 00:50:02.170477 kernel: ACPI: Core revision 20230628
Apr 14 00:50:02.170487 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 00:50:02.170500 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 00:50:02.170509 kernel: x2apic enabled
Apr 14 00:50:02.170518 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 00:50:02.170526 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 00:50:02.170536 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 00:50:02.170545 kernel: kvm-guest: setup PV IPIs
Apr 14 00:50:02.170555 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 00:50:02.170564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:50:02.170584 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 00:50:02.170593 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 00:50:02.170603 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 00:50:02.170612 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 00:50:02.170624 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 00:50:02.170633 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 00:50:02.170643 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 00:50:02.170652 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 00:50:02.170663 kernel: RETBleed: Vulnerable
Apr 14 00:50:02.170671 kernel: Speculative Store Bypass: Vulnerable
Apr 14 00:50:02.170680 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 00:50:02.170689 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 00:50:02.170698 kernel: active return thunk: its_return_thunk
Apr 14 00:50:02.170709 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 00:50:02.170717 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 00:50:02.170726 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 00:50:02.170735 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 00:50:02.170747 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 00:50:02.170756 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 00:50:02.170766 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 00:50:02.170775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 00:50:02.170785 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 00:50:02.170796 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 00:50:02.170806 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 00:50:02.170817 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 00:50:02.170827 kernel: Freeing SMP alternatives memory: 32K
Apr 14 00:50:02.170839 kernel: pid_max: default: 32768 minimum: 301
Apr 14 00:50:02.170885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 00:50:02.170897 kernel: landlock: Up and running.
Apr 14 00:50:02.170908 kernel: SELinux: Initializing.
Apr 14 00:50:02.170918 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:50:02.170929 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:50:02.170940 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 00:50:02.170950 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:50:02.170961 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:50:02.170974 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:50:02.170984 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 00:50:02.170995 kernel: signal: max sigframe size: 3632
Apr 14 00:50:02.171005 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 00:50:02.171016 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 00:50:02.171027 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 00:50:02.171038 kernel: smp: Bringing up secondary CPUs ...
Apr 14 00:50:02.171048 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 00:50:02.171059 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 00:50:02.171072 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 00:50:02.171082 kernel: smpboot: Max logical packages: 1
Apr 14 00:50:02.171092 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 00:50:02.171102 kernel: devtmpfs: initialized
Apr 14 00:50:02.171113 kernel: x86/mm: Memory block size: 128MB
Apr 14 00:50:02.171123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 00:50:02.171134 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 00:50:02.171143 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 00:50:02.171153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 00:50:02.171165 kernel: audit: initializing netlink subsys (disabled)
Apr 14 00:50:02.171175 kernel: audit: type=2000 audit(1776127800.065:1): state=initialized audit_enabled=0 res=1
Apr 14 00:50:02.171185 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 00:50:02.171195 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 00:50:02.171206 kernel: cpuidle: using governor menu
Apr 14 00:50:02.171215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 00:50:02.171225 kernel: dca service started, version 1.12.1
Apr 14 00:50:02.171235 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 00:50:02.171246 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 00:50:02.171258 kernel: PCI: Using configuration type 1 for base access
Apr 14 00:50:02.171267 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 00:50:02.171277 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 00:50:02.171286 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 00:50:02.171296 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 00:50:02.171305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 00:50:02.171314 kernel: ACPI: Added _OSI(Module Device)
Apr 14 00:50:02.171323 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 00:50:02.171332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 00:50:02.171343 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 00:50:02.171353 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 00:50:02.171362 kernel: ACPI: Interpreter enabled
Apr 14 00:50:02.171371 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 00:50:02.171380 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 00:50:02.171389 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 00:50:02.171469 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 00:50:02.171479 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 00:50:02.171487 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 00:50:02.171656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 00:50:02.171756 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 00:50:02.171846 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 00:50:02.171897 kernel: PCI host bridge to bus 0000:00
Apr 14 00:50:02.171994 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 00:50:02.172073 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 00:50:02.172155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 00:50:02.172230 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 00:50:02.172306 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 00:50:02.172386 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 00:50:02.172540 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 00:50:02.172649 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 00:50:02.172750 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 00:50:02.172846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 00:50:02.172980 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 00:50:02.173066 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 00:50:02.173159 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 00:50:02.173258 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 00:50:02.173386 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 00:50:02.173542 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 00:50:02.173636 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 00:50:02.173728 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 00:50:02.173820 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 00:50:02.173951 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 00:50:02.174042 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 00:50:02.174136 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 00:50:02.174228 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 00:50:02.174319 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 00:50:02.174476 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 00:50:02.174571 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 00:50:02.174667 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 00:50:02.174757 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 00:50:02.174884 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 00:50:02.174981 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 00:50:02.175066 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 00:50:02.175167 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 00:50:02.175252 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 00:50:02.175265 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 00:50:02.175276 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 00:50:02.175285 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 00:50:02.175296 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 00:50:02.175309 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 00:50:02.175318 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 00:50:02.175326 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 00:50:02.175336 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 00:50:02.175346 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 00:50:02.175355 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 00:50:02.175364 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 00:50:02.175375 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 00:50:02.175386 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 00:50:02.175481 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 00:50:02.175493 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 00:50:02.175504 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 00:50:02.175515 kernel: iommu: Default domain type: Translated
Apr 14 00:50:02.175525 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 00:50:02.175535 kernel: PCI: Using ACPI for IRQ routing
Apr 14 00:50:02.175545 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 00:50:02.175554 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 00:50:02.175565 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 00:50:02.175667 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 00:50:02.175757 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 00:50:02.175842 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 00:50:02.175885 kernel: vgaarb: loaded
Apr 14 00:50:02.175896 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 00:50:02.175904 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 00:50:02.175915 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 00:50:02.175926 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 00:50:02.175940 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 00:50:02.175951 kernel: pnp: PnP ACPI init
Apr 14 00:50:02.176044 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 00:50:02.176058 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 00:50:02.176067 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 00:50:02.176077 kernel: NET: Registered PF_INET protocol family
Apr 14 00:50:02.176087 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 00:50:02.176097 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 00:50:02.176110 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 00:50:02.176120 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 00:50:02.176130 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 00:50:02.176139 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 00:50:02.176148 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:50:02.176158 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:50:02.176167 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 00:50:02.176177 kernel: NET: Registered PF_XDP protocol family
Apr 14 00:50:02.176262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 00:50:02.176346 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 00:50:02.176484 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 00:50:02.176565 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 00:50:02.176634 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 00:50:02.176700 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 00:50:02.176711 kernel: PCI: CLS 0 bytes, default 64
Apr 14 00:50:02.176720 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 00:50:02.176728 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:50:02.176741 kernel: Initialise system trusted keyrings
Apr 14 00:50:02.176749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 00:50:02.176758 kernel: Key type asymmetric registered
Apr 14 00:50:02.176766 kernel: Asymmetric key parser 'x509' registered
Apr 14 00:50:02.176775 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 00:50:02.176783 kernel: io scheduler mq-deadline registered
Apr 14 00:50:02.176791 kernel: io scheduler kyber registered
Apr 14 00:50:02.176799 kernel: io scheduler bfq registered
Apr 14 00:50:02.176808 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 00:50:02.176821 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 00:50:02.176829 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 00:50:02.176838 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 00:50:02.176847 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 00:50:02.177270 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 00:50:02.177282 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 00:50:02.177291 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 00:50:02.177300 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 00:50:02.177529 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 00:50:02.177548 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 00:50:02.177615 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 00:50:02.177679 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T00:50:01 UTC (1776127801)
Apr 14 00:50:02.177773 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 00:50:02.177783 kernel: intel_pstate: CPU model not supported
Apr 14 00:50:02.177790 kernel: NET: Registered PF_INET6 protocol family
Apr 14 00:50:02.177797 kernel: Segment Routing with IPv6
Apr 14 00:50:02.177804 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 00:50:02.177813 kernel: NET: Registered PF_PACKET protocol family
Apr 14 00:50:02.177820 kernel: Key type dns_resolver registered
Apr 14 00:50:02.177827 kernel: IPI shorthand broadcast: enabled
Apr 14 00:50:02.177834 kernel: sched_clock: Marking stable (1572019701, 373684826)->(2121981503, -176276976)
Apr 14 00:50:02.177841 kernel: registered taskstats version 1
Apr 14 00:50:02.177848 kernel: Loading compiled-in X.509 certificates
Apr 14 00:50:02.178017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 00:50:02.178024 kernel: Key type .fscrypt registered
Apr 14 00:50:02.178030 kernel: Key type fscrypt-provisioning registered
Apr 14 00:50:02.178044 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 00:50:02.178050 kernel: ima: Allocated hash algorithm: sha1
Apr 14 00:50:02.178057 kernel: ima: No architecture policies found
Apr 14 00:50:02.178064 kernel: hrtimer: interrupt took 6131508 ns
Apr 14 00:50:02.178070 kernel: clk: Disabling unused clocks
Apr 14 00:50:02.178077 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 00:50:02.178084 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 00:50:02.178090 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 00:50:02.178097 kernel: Run /init as init process
Apr 14 00:50:02.178106 kernel: with arguments:
Apr 14 00:50:02.178112 kernel: /init
Apr 14 00:50:02.178119 kernel: with environment:
Apr 14 00:50:02.178126 kernel: HOME=/
Apr 14 00:50:02.178132 kernel: TERM=linux
Apr 14 00:50:02.178141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:50:02.178150 systemd[1]: Detected virtualization kvm.
Apr 14 00:50:02.178322 systemd[1]: Detected architecture x86-64.
Apr 14 00:50:02.178336 systemd[1]: Running in initrd.
Apr 14 00:50:02.178343 systemd[1]: No hostname configured, using default hostname.
Apr 14 00:50:02.178350 systemd[1]: Hostname set to .
Apr 14 00:50:02.178358 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:50:02.178365 systemd[1]: Queued start job for default target initrd.target.
Apr 14 00:50:02.178372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:50:02.178380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:50:02.178388 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 00:50:02.178448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:50:02.178457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 00:50:02.178473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 00:50:02.178484 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 00:50:02.178492 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 00:50:02.178501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:50:02.178508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:50:02.178515 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:50:02.178523 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:50:02.178530 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:50:02.178538 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:50:02.178545 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:50:02.178553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:50:02.178562 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 00:50:02.178570 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 00:50:02.178578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:50:02.178585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:50:02.178592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:50:02.178600 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:50:02.178607 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 00:50:02.178614 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:50:02.178622 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 00:50:02.178631 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 00:50:02.178639 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:50:02.178646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:50:02.178653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:50:02.178829 systemd-journald[195]: Collecting audit messages is disabled.
Apr 14 00:50:02.178899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 00:50:02.178908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:50:02.178916 systemd-journald[195]: Journal started
Apr 14 00:50:02.178941 systemd-journald[195]: Runtime Journal (/run/log/journal/63323e9279e749df9370cfe26e36309b) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:50:02.185610 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:50:02.189170 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 00:50:02.191725 systemd-modules-load[196]: Inserted module 'overlay'
Apr 14 00:50:02.387108 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 00:50:02.387136 kernel: Bridge firewalling registered
Apr 14 00:50:02.200129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:50:02.229656 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 14 00:50:02.400615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:50:02.404076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:50:02.405341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:50:02.412365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:50:02.428693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:50:02.435746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:50:02.440698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:50:02.447760 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:50:02.512943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:50:02.517130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:50:02.520850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:50:02.540916 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 00:50:02.547562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:50:02.556138 dracut-cmdline[231]: dracut-dracut-053
Apr 14 00:50:02.560647 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:50:02.599019 systemd-resolved[235]: Positive Trust Anchors:
Apr 14 00:50:02.599051 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:50:02.599076 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:50:02.601528 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 14 00:50:02.602493 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:50:02.611622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:50:02.711582 kernel: SCSI subsystem initialized
Apr 14 00:50:02.722676 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 00:50:02.737561 kernel: iscsi: registered transport (tcp)
Apr 14 00:50:02.761070 kernel: iscsi: registered transport (qla4xxx)
Apr 14 00:50:02.761187 kernel: QLogic iSCSI HBA Driver
Apr 14 00:50:02.809613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:50:02.832247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 00:50:02.868108 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 00:50:02.868253 kernel: device-mapper: uevent: version 1.0.3
Apr 14 00:50:02.868268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 00:50:02.925636 kernel: raid6: avx512x4 gen() 33955 MB/s
Apr 14 00:50:02.944674 kernel: raid6: avx512x2 gen() 30218 MB/s
Apr 14 00:50:02.962657 kernel: raid6: avx512x1 gen() 30741 MB/s
Apr 14 00:50:02.980698 kernel: raid6: avx2x4 gen() 31973 MB/s
Apr 14 00:50:02.998684 kernel: raid6: avx2x2 gen() 21555 MB/s
Apr 14 00:50:03.018619 kernel: raid6: avx2x1 gen() 27131 MB/s
Apr 14 00:50:03.018806 kernel: raid6: using algorithm avx512x4 gen() 33955 MB/s
Apr 14 00:50:03.039012 kernel: raid6: .... xor() 7951 MB/s, rmw enabled
Apr 14 00:50:03.039131 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 00:50:03.066548 kernel: xor: automatically using best checksumming function avx
Apr 14 00:50:03.252634 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 00:50:03.266225 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:50:03.279817 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:50:03.293099 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Apr 14 00:50:03.297370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:50:03.300687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 00:50:03.328970 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Apr 14 00:50:03.381079 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:50:03.402930 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:50:03.438838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:50:03.452773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 00:50:03.523541 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 00:50:03.527788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:50:03.532553 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:50:03.535107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:50:03.548827 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 00:50:03.549537 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:50:03.560941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 00:50:03.563630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 00:50:03.586282 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 00:50:03.586372 kernel: GPT:9289727 != 19775487
Apr 14 00:50:03.586386 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 00:50:03.586435 kernel: GPT:9289727 != 19775487
Apr 14 00:50:03.586443 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 00:50:03.586451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:50:03.584811 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:50:03.598644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:50:03.598800 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:50:03.610974 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:50:03.617974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:50:03.643330 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Apr 14 00:50:03.643357 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (471)
Apr 14 00:50:03.618120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:50:03.618475 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:50:03.657617 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 00:50:03.657655 kernel: libata version 3.00 loaded.
Apr 14 00:50:03.657663 kernel: AES CTR mode by8 optimization enabled
Apr 14 00:50:03.663127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:50:03.675990 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 00:50:03.676285 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 00:50:03.679520 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 00:50:03.679684 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 00:50:03.680680 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 00:50:03.908159 kernel: scsi host0: ahci
Apr 14 00:50:03.908494 kernel: scsi host1: ahci
Apr 14 00:50:03.908638 kernel: scsi host2: ahci
Apr 14 00:50:03.908708 kernel: scsi host3: ahci
Apr 14 00:50:03.908774 kernel: scsi host4: ahci
Apr 14 00:50:03.908846 kernel: scsi host5: ahci
Apr 14 00:50:03.912152 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 00:50:03.912164 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 00:50:03.912171 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 00:50:03.912182 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 00:50:03.912190 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 00:50:03.912197 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 00:50:03.908944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:50:03.922152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 00:50:03.928344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:50:03.932721 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 00:50:03.936235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 00:50:03.955913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 00:50:03.960360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:50:03.968125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:50:03.968141 disk-uuid[565]: Primary Header is updated.
Apr 14 00:50:03.968141 disk-uuid[565]: Secondary Entries is updated.
Apr 14 00:50:03.968141 disk-uuid[565]: Secondary Header is updated.
Apr 14 00:50:03.971767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:50:03.976478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:50:03.990731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:50:04.002120 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 00:50:04.002201 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 00:50:04.005483 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 00:50:04.008932 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 00:50:04.014945 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 00:50:04.014978 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 00:50:04.018948 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 00:50:04.018971 kernel: ata3.00: applying bridge limits
Apr 14 00:50:04.021508 kernel: ata3.00: configured for UDMA/100
Apr 14 00:50:04.026766 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 00:50:04.084976 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 00:50:04.085173 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 00:50:04.106501 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 00:50:04.979599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:50:04.980032 disk-uuid[567]: The operation has completed successfully.
Apr 14 00:50:05.016906 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 00:50:05.017025 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 00:50:05.049939 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 00:50:05.061120 sh[606]: Success
Apr 14 00:50:05.085515 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 00:50:05.138801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 00:50:05.208661 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 00:50:05.222941 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 00:50:05.255603 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 00:50:05.255722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:50:05.255734 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 00:50:05.260803 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 00:50:05.266358 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 00:50:05.279305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 00:50:05.282674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 00:50:05.308389 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 00:50:05.316966 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 00:50:05.332670 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:50:05.332778 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:50:05.332804 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:50:05.339673 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:50:05.356075 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 00:50:05.362196 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:50:05.370621 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 00:50:05.381477 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 00:50:05.529144 ignition[674]: Ignition 2.19.0
Apr 14 00:50:05.531548 ignition[674]: Stage: fetch-offline
Apr 14 00:50:05.531614 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:05.531624 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:05.531736 ignition[674]: parsed url from cmdline: ""
Apr 14 00:50:05.531739 ignition[674]: no config URL provided
Apr 14 00:50:05.531745 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 00:50:05.531752 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Apr 14 00:50:05.532045 ignition[674]: op(1): [started] loading QEMU firmware config module
Apr 14 00:50:05.532051 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 00:50:05.546672 ignition[674]: op(1): [finished] loading QEMU firmware config module
Apr 14 00:50:05.546742 ignition[674]: QEMU firmware config was not found. Ignoring...
Apr 14 00:50:05.617464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:50:05.634767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:50:05.677162 systemd-networkd[794]: lo: Link UP
Apr 14 00:50:05.677697 systemd-networkd[794]: lo: Gained carrier
Apr 14 00:50:05.679986 systemd-networkd[794]: Enumeration completed
Apr 14 00:50:05.681824 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:50:05.681828 systemd-networkd[794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:50:05.683676 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:50:05.687927 systemd-networkd[794]: eth0: Link UP
Apr 14 00:50:05.687931 systemd-networkd[794]: eth0: Gained carrier
Apr 14 00:50:05.687942 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:50:05.690807 systemd[1]: Reached target network.target - Network.
Apr 14 00:50:05.728837 systemd-networkd[794]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:50:05.866666 ignition[674]: parsing config with SHA512: ac64db9a3ff511194d39b14c7be31ab4251bbc3ce8b4e0b1965d26b3b028ac69d1ca05863d945f611b1a5ea4196c8943f5595061598bd6eb0197c7f426c27225
Apr 14 00:50:05.872271 unknown[674]: fetched base config from "system"
Apr 14 00:50:05.872285 unknown[674]: fetched user config from "qemu"
Apr 14 00:50:05.881214 ignition[674]: fetch-offline: fetch-offline passed
Apr 14 00:50:05.881307 ignition[674]: Ignition finished successfully
Apr 14 00:50:05.884194 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:50:05.891229 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 00:50:05.904042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 00:50:05.925074 ignition[798]: Ignition 2.19.0
Apr 14 00:50:05.925105 ignition[798]: Stage: kargs
Apr 14 00:50:05.925240 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:05.925247 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:05.926304 ignition[798]: kargs: kargs passed
Apr 14 00:50:05.934974 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 00:50:05.926336 ignition[798]: Ignition finished successfully
Apr 14 00:50:05.955754 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 00:50:05.978105 ignition[807]: Ignition 2.19.0
Apr 14 00:50:05.978146 ignition[807]: Stage: disks
Apr 14 00:50:05.978332 ignition[807]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:05.983640 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 00:50:05.978343 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:05.984258 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 00:50:05.979614 ignition[807]: disks: disks passed
Apr 14 00:50:05.988940 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 00:50:05.979665 ignition[807]: Ignition finished successfully
Apr 14 00:50:05.994151 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:50:06.000980 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:50:06.008073 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:50:06.034034 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 00:50:06.058978 systemd-fsck[818]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 00:50:06.066826 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 00:50:06.087739 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 00:50:06.195664 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 00:50:06.196339 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 00:50:06.198949 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:50:06.209773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:50:06.215684 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 00:50:06.233719 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (826)
Apr 14 00:50:06.233744 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:50:06.233754 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:50:06.233761 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:50:06.222692 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 00:50:06.222735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 00:50:06.253512 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:50:06.222763 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:50:06.235927 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 00:50:06.254854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:50:06.270867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 00:50:06.322072 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 00:50:06.331825 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory
Apr 14 00:50:06.340599 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 00:50:06.350481 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 00:50:06.525020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 00:50:06.546799 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 00:50:06.551608 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 00:50:06.562255 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 00:50:06.567815 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:50:06.592645 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 00:50:06.608360 ignition[940]: INFO : Ignition 2.19.0
Apr 14 00:50:06.608360 ignition[940]: INFO : Stage: mount
Apr 14 00:50:06.608360 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:06.608360 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:06.620388 ignition[940]: INFO : mount: mount passed
Apr 14 00:50:06.620388 ignition[940]: INFO : Ignition finished successfully
Apr 14 00:50:06.621550 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 00:50:06.633819 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 00:50:06.641256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:50:06.658506 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (953)
Apr 14 00:50:06.658594 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:50:06.663974 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:50:06.664007 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:50:06.673657 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:50:06.675601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:50:06.711718 ignition[970]: INFO : Ignition 2.19.0
Apr 14 00:50:06.711718 ignition[970]: INFO : Stage: files
Apr 14 00:50:06.711718 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:06.711718 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:06.729041 ignition[970]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 00:50:06.729041 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 00:50:06.729041 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 00:50:06.729041 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 00:50:06.751094 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 00:50:06.751094 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 00:50:06.751094 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:50:06.751094 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 00:50:06.730712 unknown[970]: wrote ssh authorized keys file for user: core
Apr 14 00:50:06.786466 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 00:50:06.844264 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:50:06.844264 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 00:50:06.844264 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 14 00:50:06.937818 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:50:07.048601 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 00:50:07.082703 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 14 00:50:07.148083 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 14 00:50:07.735035 systemd-networkd[794]: eth0: Gained IPv6LL
Apr 14 00:50:07.770089 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 00:50:07.770089 ignition[970]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 14 00:50:07.785577 ignition[970]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:50:07.785577 ignition[970]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:50:07.785577 ignition[970]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 14 00:50:07.785577 ignition[970]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 14 00:50:07.785577 ignition[970]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:50:07.821874 ignition[970]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:50:07.821874 ignition[970]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 14 00:50:07.821874 ignition[970]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:50:07.870314 ignition[970]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:50:07.886812 ignition[970]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:50:07.886812 ignition[970]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:50:07.886812 ignition[970]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 00:50:07.904035 ignition[970]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 00:50:07.904035 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:50:07.904035 ignition[970]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:50:07.904035 ignition[970]: INFO : files: files passed
Apr 14 00:50:07.904035 ignition[970]: INFO : Ignition finished successfully
Apr 14 00:50:07.905753 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 00:50:07.947340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 00:50:08.011705 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 00:50:08.015575 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 00:50:08.015706 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 00:50:08.062719 initrd-setup-root-after-ignition[998]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 00:50:08.072683 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:50:08.072683 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:50:08.086710 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:50:08.094323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:50:08.105514 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 00:50:08.128037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 00:50:08.210195 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 00:50:08.210709 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 00:50:08.217197 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 00:50:08.231115 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 00:50:08.233522 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 00:50:08.236117 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 00:50:08.282323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:50:08.305267 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 00:50:08.407277 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:50:08.408867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:50:08.409213 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 00:50:08.426241 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 00:50:08.427340 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:50:08.432007 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 00:50:08.436754 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 00:50:08.443369 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 00:50:08.446656 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:50:08.449566 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 00:50:08.457061 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 00:50:08.458348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:50:08.489779 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 00:50:08.495799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 00:50:08.500287 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 00:50:08.502613 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 00:50:08.502800 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:50:08.525825 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:50:08.529552 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:50:08.533101 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 00:50:08.540147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:50:08.551760 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 00:50:08.552040 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:50:08.563159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 00:50:08.563367 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:50:08.571714 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 00:50:08.575335 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 00:50:08.581770 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:50:08.587926 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 00:50:08.593738 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 00:50:08.602126 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 00:50:08.602945 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:50:08.607308 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 00:50:08.607601 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:50:08.621564 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 00:50:08.621986 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:50:08.626657 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 00:50:08.626826 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 00:50:08.656575 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 00:50:08.657274 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 00:50:08.657525 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:50:08.672937 ignition[1024]: INFO : Ignition 2.19.0
Apr 14 00:50:08.672937 ignition[1024]: INFO : Stage: umount
Apr 14 00:50:08.672937 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:50:08.672937 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:50:08.697140 ignition[1024]: INFO : umount: umount passed
Apr 14 00:50:08.697140 ignition[1024]: INFO : Ignition finished successfully
Apr 14 00:50:08.673754 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 00:50:08.677450 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 00:50:08.678083 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:50:08.683092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 00:50:08.683660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:50:08.698261 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 00:50:08.698470 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 00:50:08.704828 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 00:50:08.704967 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 00:50:08.711074 systemd[1]: Stopped target network.target - Network.
Apr 14 00:50:08.718634 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 00:50:08.719118 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 00:50:08.728801 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 00:50:08.728943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 00:50:08.732270 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 00:50:08.732724 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 00:50:08.743169 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 00:50:08.743631 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 00:50:08.749228 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 00:50:08.764067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 00:50:08.767354 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 00:50:08.775935 systemd-networkd[794]: eth0: DHCPv6 lease lost
Apr 14 00:50:08.782694 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 00:50:08.783203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 00:50:08.798856 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 00:50:08.798953 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:50:08.829155 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 00:50:08.837695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 00:50:08.837813 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:50:08.843067 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:50:08.845973 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 00:50:08.846098 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 00:50:08.848317 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 00:50:08.848492 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 00:50:08.859039 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 00:50:08.859085 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 00:50:08.863772 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 00:50:08.863865 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:50:08.865741 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 00:50:08.865832 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:50:08.883353 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 00:50:08.883993 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:50:08.912161 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 00:50:08.912332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:50:08.926154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 00:50:08.926241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:50:08.928199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 00:50:08.928238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:50:08.930982 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 00:50:08.931104 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:50:08.942736 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 00:50:08.942806 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:50:08.960828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:50:08.961140 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:50:09.007018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 00:50:09.016464 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 00:50:09.016639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:50:09.018747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:50:09.018877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:50:09.021587 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 00:50:09.021670 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 00:50:09.037821 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 00:50:09.038383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 00:50:09.042805 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 00:50:09.078825 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 00:50:09.086676 systemd[1]: Switching root.
Apr 14 00:50:09.120369 systemd-journald[195]: Journal stopped
Apr 14 00:50:10.159988 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 14 00:50:10.160070 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 00:50:10.160146 kernel: SELinux: policy capability open_perms=1
Apr 14 00:50:10.160164 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 00:50:10.160176 kernel: SELinux: policy capability always_check_network=0
Apr 14 00:50:10.160189 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 00:50:10.160206 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 00:50:10.160220 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 00:50:10.160237 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 00:50:10.160251 kernel: audit: type=1403 audit(1776127809.250:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 00:50:10.160266 systemd[1]: Successfully loaded SELinux policy in 44.748ms.
Apr 14 00:50:10.160290 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.605ms.
Apr 14 00:50:10.160306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:50:10.160321 systemd[1]: Detected virtualization kvm.
Apr 14 00:50:10.160335 systemd[1]: Detected architecture x86-64.
Apr 14 00:50:10.160353 systemd[1]: Detected first boot.
Apr 14 00:50:10.160366 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:50:10.160379 zram_generator::config[1068]: No configuration found.
Apr 14 00:50:10.160457 systemd[1]: Populated /etc with preset unit settings.
Apr 14 00:50:10.160473 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 14 00:50:10.160487 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 14 00:50:10.160507 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 14 00:50:10.160521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 00:50:10.160538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 00:50:10.160553 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 00:50:10.160567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 00:50:10.160580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 00:50:10.160595 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 00:50:10.160613 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 00:50:10.160627 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 00:50:10.160642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:50:10.160656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:50:10.160672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 00:50:10.160686 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 00:50:10.160700 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 00:50:10.160715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:50:10.160732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 00:50:10.160746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:50:10.160760 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 14 00:50:10.160773 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 14 00:50:10.160787 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:50:10.160801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 00:50:10.160814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:50:10.160827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:50:10.160840 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:50:10.160854 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:50:10.160867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 00:50:10.160881 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 00:50:10.160980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:50:10.161006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:50:10.161018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:50:10.161030 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 00:50:10.161043 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 00:50:10.161056 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 00:50:10.161068 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 00:50:10.161084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:10.161097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 00:50:10.161113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 00:50:10.161129 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 00:50:10.161144 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 00:50:10.161157 systemd[1]: Reached target machines.target - Containers.
Apr 14 00:50:10.161169 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 00:50:10.161181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:50:10.161194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:50:10.161207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 00:50:10.161220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:50:10.161236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:50:10.161249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:50:10.161261 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 00:50:10.161274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:50:10.161288 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 00:50:10.161300 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 14 00:50:10.161314 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 14 00:50:10.161327 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 14 00:50:10.161342 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 14 00:50:10.161355 kernel: fuse: init (API version 7.39)
Apr 14 00:50:10.161367 kernel: loop: module loaded
Apr 14 00:50:10.161380 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:50:10.161461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:50:10.161476 kernel: ACPI: bus type drm_connector registered
Apr 14 00:50:10.161491 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 00:50:10.161534 systemd-journald[1152]: Collecting audit messages is disabled.
Apr 14 00:50:10.161566 systemd-journald[1152]: Journal started
Apr 14 00:50:10.161595 systemd-journald[1152]: Runtime Journal (/run/log/journal/63323e9279e749df9370cfe26e36309b) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:50:09.714970 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 00:50:09.743011 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 00:50:09.744104 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 14 00:50:09.744599 systemd[1]: systemd-journald.service: Consumed 1.366s CPU time.
Apr 14 00:50:10.166984 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 00:50:10.172511 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:50:10.179862 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 14 00:50:10.179951 systemd[1]: Stopped verity-setup.service.
Apr 14 00:50:10.182657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:10.193601 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:50:10.194033 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 00:50:10.196874 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 00:50:10.200057 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 00:50:10.203020 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 00:50:10.206589 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 00:50:10.209700 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 00:50:10.212862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 00:50:10.216544 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:50:10.221338 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 00:50:10.221671 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 00:50:10.225182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:50:10.225688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:50:10.229571 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:50:10.229817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:50:10.233023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:50:10.233162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:50:10.236973 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 00:50:10.237178 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 00:50:10.240738 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:50:10.240972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:50:10.244514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:50:10.247967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 00:50:10.251872 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 00:50:10.255853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:50:10.269225 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 00:50:10.282543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 00:50:10.287088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 00:50:10.290032 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 00:50:10.290082 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:50:10.294052 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 00:50:10.298993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 00:50:10.304144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 00:50:10.310559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:50:10.311930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 00:50:10.317200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 00:50:10.321493 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:50:10.322817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 00:50:10.327174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:50:10.329243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:50:10.335714 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 00:50:10.343742 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 00:50:10.358168 systemd-journald[1152]: Time spent on flushing to /var/log/journal/63323e9279e749df9370cfe26e36309b is 21.594ms for 959 entries.
Apr 14 00:50:10.358168 systemd-journald[1152]: System Journal (/var/log/journal/63323e9279e749df9370cfe26e36309b) is 8.0M, max 195.6M, 187.6M free.
Apr 14 00:50:10.414479 systemd-journald[1152]: Received client request to flush runtime journal.
Apr 14 00:50:10.414526 kernel: loop0: detected capacity change from 0 to 219192
Apr 14 00:50:10.349028 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 00:50:10.354847 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 00:50:10.362879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 00:50:10.368190 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 00:50:10.371660 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 00:50:10.383634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:50:10.392331 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 00:50:10.397613 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 00:50:10.403659 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 14 00:50:10.419880 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 00:50:10.440237 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 00:50:10.450237 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 00:50:10.499640 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 00:50:10.516115 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 00:50:10.530538 kernel: loop1: detected capacity change from 0 to 142488
Apr 14 00:50:10.530390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:50:10.584328 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 14 00:50:10.584347 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 14 00:50:10.589880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:50:10.600488 kernel: loop2: detected capacity change from 0 to 140768
Apr 14 00:50:10.653500 kernel: loop3: detected capacity change from 0 to 219192
Apr 14 00:50:10.681696 kernel: loop4: detected capacity change from 0 to 142488
Apr 14 00:50:10.712514 kernel: loop5: detected capacity change from 0 to 140768
Apr 14 00:50:10.743250 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 00:50:10.744239 (sd-merge)[1206]: Merged extensions into '/usr'.
Apr 14 00:50:10.760304 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 00:50:10.760334 systemd[1]: Reloading...
Apr 14 00:50:10.842566 zram_generator::config[1231]: No configuration found.
Apr 14 00:50:11.023562 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 00:50:11.067496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:50:11.133172 systemd[1]: Reloading finished in 372 ms.
Apr 14 00:50:11.187030 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 00:50:11.192545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 00:50:11.220075 systemd[1]: Starting ensure-sysext.service...
Apr 14 00:50:11.226491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:50:11.235950 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Apr 14 00:50:11.235990 systemd[1]: Reloading...
Apr 14 00:50:11.270290 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 00:50:11.271756 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 00:50:11.272688 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 00:50:11.273722 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Apr 14 00:50:11.273821 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Apr 14 00:50:11.279480 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:50:11.279513 systemd-tmpfiles[1270]: Skipping /boot
Apr 14 00:50:11.295952 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:50:11.295968 systemd-tmpfiles[1270]: Skipping /boot
Apr 14 00:50:11.316139 zram_generator::config[1297]: No configuration found.
Apr 14 00:50:11.539851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:50:11.603302 systemd[1]: Reloading finished in 366 ms.
Apr 14 00:50:11.640087 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 00:50:11.663206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:50:11.694387 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 00:50:11.701547 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 00:50:11.710216 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 00:50:11.721225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:50:11.728585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:50:11.744734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 00:50:11.757818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:11.758200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:50:11.767767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:50:11.776267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:50:11.785756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:50:11.791500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:50:11.802293 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 00:50:11.806976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:11.808805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 00:50:11.810261 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Apr 14 00:50:11.813576 augenrules[1361]: No rules
Apr 14 00:50:11.817140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 00:50:11.823524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:50:11.823750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:50:11.829105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:50:11.829382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:50:11.836687 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:50:11.837014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:50:11.903773 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 00:50:11.911343 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 00:50:11.919865 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:50:11.926624 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 00:50:11.944713 systemd[1]: Finished ensure-sysext.service.
Apr 14 00:50:11.956812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:11.957020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:50:11.962760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:50:11.977368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:50:11.984772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:50:11.991059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:50:11.994791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:50:12.008866 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:50:12.018715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 00:50:12.025333 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 00:50:12.029585 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 00:50:12.029622 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:50:12.030192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:50:12.030514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:50:12.035212 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:50:12.035444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:50:12.045852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:50:12.046187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:50:12.052733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:50:12.052958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:50:12.062392 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 14 00:50:12.067199 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 00:50:12.083607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1397)
Apr 14 00:50:12.095829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:50:12.096023 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:50:12.125995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:50:12.132852 systemd-resolved[1346]: Positive Trust Anchors:
Apr 14 00:50:12.136252 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:50:12.139597 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:50:12.140714 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 00:50:12.149656 systemd-resolved[1346]: Defaulting to hostname 'linux'.
Apr 14 00:50:12.151884 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:50:12.155850 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 14 00:50:12.156129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:50:12.168714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 00:50:12.177089 kernel: ACPI: button: Power Button [PWRF]
Apr 14 00:50:12.180226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 00:50:12.180839 systemd-networkd[1401]: lo: Link UP
Apr 14 00:50:12.180846 systemd-networkd[1401]: lo: Gained carrier
Apr 14 00:50:12.182520 systemd-networkd[1401]: Enumeration completed
Apr 14 00:50:12.184273 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:50:12.184882 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:50:12.185650 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:50:12.186649 systemd-networkd[1401]: eth0: Link UP
Apr 14 00:50:12.186743 systemd-networkd[1401]: eth0: Gained carrier
Apr 14 00:50:12.186778 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:50:12.190862 systemd[1]: Reached target network.target - Network.
Apr 14 00:50:12.194654 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 00:50:12.220042 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:50:12.221660 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 00:50:12.726060 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 00:50:12.726201 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 00:50:12.223129 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection.
Apr 14 00:50:12.715861 systemd-resolved[1346]: Clock change detected. Flushing caches.
Apr 14 00:50:12.715918 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 00:50:12.715952 systemd-timesyncd[1404]: Initial clock synchronization to Tue 2026-04-14 00:50:12.715770 UTC.
Apr 14 00:50:12.730164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 00:50:12.748269 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 14 00:50:12.780557 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 00:50:12.786052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:50:13.189385 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 00:50:13.336834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:50:13.362263 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 00:50:13.376131 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:50:13.402261 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 00:50:13.408642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:50:13.413932 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:50:13.418381 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 00:50:13.422934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 00:50:13.429483 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 00:50:13.434447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 00:50:13.438956 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 00:50:13.492184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 00:50:13.492504 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:50:13.498175 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:50:13.504254 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 00:50:13.509833 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 00:50:13.525490 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 00:50:13.532711 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 00:50:13.536814 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 00:50:13.540353 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:50:13.543412 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:50:13.546591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:50:13.546762 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:50:13.548103 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 00:50:13.553335 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 00:50:13.557469 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:50:13.558407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 00:50:13.563048 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 00:50:13.566151 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 00:50:13.567697 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 00:50:13.574938 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 00:50:13.585413 jq[1441]: false
Apr 14 00:50:13.591300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 00:50:13.604717 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found loop3
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found loop4
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found loop5
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found sr0
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda1
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda2
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda3
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found usr
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda4
Apr 14 00:50:13.610874 extend-filesystems[1442]: Found vda6
Apr 14 00:50:13.667065 extend-filesystems[1442]: Found vda7
Apr 14 00:50:13.667065 extend-filesystems[1442]: Found vda9
Apr 14 00:50:13.667065 extend-filesystems[1442]: Checking size of /dev/vda9
Apr 14 00:50:13.621436 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 00:50:13.628936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 00:50:13.630346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 00:50:13.633340 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 00:50:13.671211 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 00:50:13.677653 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 00:50:13.688421 dbus-daemon[1440]: [system] SELinux support is enabled
Apr 14 00:50:13.688794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 00:50:13.689000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 00:50:13.689280 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 00:50:13.689423 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 00:50:13.694265 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 00:50:13.700207 jq[1455]: true
Apr 14 00:50:13.702769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 00:50:13.702921 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 00:50:13.718097 extend-filesystems[1442]: Resized partition /dev/vda9
Apr 14 00:50:13.730116 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Apr 14 00:50:13.735492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 00:50:13.735754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 00:50:13.747222 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 00:50:13.747264 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 00:50:13.753732 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 00:50:13.766771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1402)
Apr 14 00:50:13.766844 update_engine[1451]: I20260414 00:50:13.766144 1451 main.cc:92] Flatcar Update Engine starting
Apr 14 00:50:13.767379 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 00:50:13.767871 update_engine[1451]: I20260414 00:50:13.767398 1451 update_check_scheduler.cc:74] Next update check in 7m9s
Apr 14 00:50:13.786840 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 00:50:13.787117 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 00:50:13.804073 tar[1461]: linux-amd64/LICENSE
Apr 14 00:50:13.805958 tar[1461]: linux-amd64/helm
Apr 14 00:50:13.818067 jq[1462]: true
Apr 14 00:50:13.905008 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 00:50:13.907236 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 00:50:13.910898 systemd-logind[1449]: New seat seat0.
Apr 14 00:50:13.950903 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 00:50:13.931262 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 00:50:14.009851 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 00:50:14.009851 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 00:50:14.009851 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 00:50:14.040731 extend-filesystems[1442]: Resized filesystem in /dev/vda9
Apr 14 00:50:14.016912 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 00:50:14.018721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 00:50:14.064351 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 00:50:14.065471 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 00:50:14.066726 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 00:50:14.079346 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 00:50:14.431956 containerd[1472]: time="2026-04-14T00:50:14.431195449Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 00:50:14.498827 systemd-networkd[1401]: eth0: Gained IPv6LL
Apr 14 00:50:14.511409 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 00:50:14.517731 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 00:50:14.538923 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 00:50:14.574872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:50:14.582267 containerd[1472]: time="2026-04-14T00:50:14.582209369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.589144 containerd[1472]: time="2026-04-14T00:50:14.588870539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:50:14.589254 containerd[1472]: time="2026-04-14T00:50:14.589240809Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 00:50:14.589291 containerd[1472]: time="2026-04-14T00:50:14.589284112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 00:50:14.589462 containerd[1472]: time="2026-04-14T00:50:14.589452476Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 00:50:14.590151 containerd[1472]: time="2026-04-14T00:50:14.590086667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.590340 containerd[1472]: time="2026-04-14T00:50:14.590322369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:50:14.590396 containerd[1472]: time="2026-04-14T00:50:14.590383334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.592808 containerd[1472]: time="2026-04-14T00:50:14.592252785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:50:14.592808 containerd[1472]: time="2026-04-14T00:50:14.592296538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.592808 containerd[1472]: time="2026-04-14T00:50:14.592311215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:50:14.592808 containerd[1472]: time="2026-04-14T00:50:14.592320892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.592808 containerd[1472]: time="2026-04-14T00:50:14.592432101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.594879 containerd[1472]: time="2026-04-14T00:50:14.594286071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:50:14.594879 containerd[1472]: time="2026-04-14T00:50:14.594457328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:50:14.594879 containerd[1472]: time="2026-04-14T00:50:14.594469649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 00:50:14.594879 containerd[1472]: time="2026-04-14T00:50:14.594811864Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 00:50:14.594879 containerd[1472]: time="2026-04-14T00:50:14.594843463Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 00:50:14.598416 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 00:50:14.620411 containerd[1472]: time="2026-04-14T00:50:14.620099879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 00:50:14.624807 containerd[1472]: time="2026-04-14T00:50:14.621937468Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 00:50:14.624807 containerd[1472]: time="2026-04-14T00:50:14.622229703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 00:50:14.624807 containerd[1472]: time="2026-04-14T00:50:14.622294025Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 00:50:14.624807 containerd[1472]: time="2026-04-14T00:50:14.622315600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 00:50:14.624807 containerd[1472]: time="2026-04-14T00:50:14.623221330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 00:50:14.628990 containerd[1472]: time="2026-04-14T00:50:14.628918163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630124185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630194207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630219341Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630234914Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630251779Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630268164Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630286752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630303862Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630319374Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630335055Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630350717Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630373341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630390317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.630697 containerd[1472]: time="2026-04-14T00:50:14.630405799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.631432 containerd[1472]: time="2026-04-14T00:50:14.630420147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.631432 containerd[1472]: time="2026-04-14T00:50:14.630434004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.631432 containerd[1472]: time="2026-04-14T00:50:14.630461399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.631432 containerd[1472]: time="2026-04-14T00:50:14.630477295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.630494751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632410531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632437107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632458913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632841441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632900977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.632923619Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.633289080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.633354522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.633954 containerd[1472]: time="2026-04-14T00:50:14.633368408Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636771190Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636819337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636832826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636846681Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636858769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636877995Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636896581Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 00:50:14.637699 containerd[1472]: time="2026-04-14T00:50:14.636908917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 00:50:14.638650 containerd[1472]: time="2026-04-14T00:50:14.638478771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 00:50:14.639003 containerd[1472]: time="2026-04-14T00:50:14.638984068Z" level=info msg="Connect containerd service"
Apr 14 00:50:14.641822 containerd[1472]: time="2026-04-14T00:50:14.640107294Z" level=info msg="using legacy CRI server"
Apr 14 00:50:14.641822 containerd[1472]: time="2026-04-14T00:50:14.640180633Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 00:50:14.641822 containerd[1472]: time="2026-04-14T00:50:14.640824411Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 00:50:14.643271 containerd[1472]: time="2026-04-14T00:50:14.643242495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 00:50:14.647004 containerd[1472]: time="2026-04-14T00:50:14.646959414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 14 00:50:14.647594 containerd[1472]: time="2026-04-14T00:50:14.647345623Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657188100Z" level=info msg="Start subscribing containerd event"
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657405295Z" level=info msg="Start recovering state"
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657935004Z" level=info msg="Start event monitor"
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657961516Z" level=info msg="Start snapshots syncer"
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657975812Z" level=info msg="Start cni network conf syncer for default"
Apr 14 00:50:14.659377 containerd[1472]: time="2026-04-14T00:50:14.657990827Z" level=info msg="Start streaming server"
Apr 14 00:50:14.663187 systemd[1]: Started containerd.service - containerd container runtime.
Apr 14 00:50:14.666148 containerd[1472]: time="2026-04-14T00:50:14.666072150Z" level=info msg="containerd successfully booted in 0.238692s"
Apr 14 00:50:14.731147 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 00:50:14.739012 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 00:50:14.793368 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 00:50:14.809319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 00:50:14.920443 tar[1461]: linux-amd64/README.md Apr 14 00:50:14.956068 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 00:50:14.992720 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 00:50:15.138720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 00:50:15.164853 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 00:50:15.236791 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 00:50:15.239187 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 00:50:15.267155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 00:50:15.324313 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 00:50:15.395104 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 00:50:15.402773 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 00:50:15.407291 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 00:50:16.287269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 00:50:16.292164 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806). Apr 14 00:50:16.299451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:50:16.306201 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 00:50:16.309806 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:50:16.310624 systemd[1]: Startup finished in 1.815s (kernel) + 7.422s (initrd) + 6.608s (userspace) = 15.847s. 
Apr 14 00:50:16.382886 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:16.386632 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:16.404601 systemd-logind[1449]: New session 1 of user core. Apr 14 00:50:16.409121 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 00:50:16.417374 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 00:50:16.437082 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 00:50:16.504824 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 00:50:16.521316 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 00:50:16.711603 systemd[1568]: Queued start job for default target default.target. Apr 14 00:50:16.727432 systemd[1568]: Created slice app.slice - User Application Slice. Apr 14 00:50:16.727457 systemd[1568]: Reached target paths.target - Paths. Apr 14 00:50:16.727469 systemd[1568]: Reached target timers.target - Timers. Apr 14 00:50:16.729399 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 00:50:16.747184 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 00:50:16.747327 systemd[1568]: Reached target sockets.target - Sockets. Apr 14 00:50:16.747339 systemd[1568]: Reached target basic.target - Basic System. Apr 14 00:50:16.747370 systemd[1568]: Reached target default.target - Main User Target. Apr 14 00:50:16.747392 systemd[1568]: Startup finished in 203ms. Apr 14 00:50:16.748271 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 00:50:16.754149 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 14 00:50:16.819141 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Apr 14 00:50:16.875959 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:16.878213 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:16.891954 systemd-logind[1449]: New session 2 of user core. Apr 14 00:50:16.900940 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 00:50:17.007809 sshd[1580]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:17.020857 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:59810.service: Deactivated successfully. Apr 14 00:50:17.023006 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 00:50:17.024493 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Apr 14 00:50:17.029952 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:59818.service - OpenSSH per-connection server daemon (10.0.0.1:59818). Apr 14 00:50:17.031116 systemd-logind[1449]: Removed session 2. Apr 14 00:50:17.047280 kubelet[1554]: E0414 00:50:17.047165 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:50:17.050246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:50:17.050391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:50:17.050685 systemd[1]: kubelet.service: Consumed 1.404s CPU time. 
Apr 14 00:50:17.067177 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:17.068311 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:17.074131 systemd-logind[1449]: New session 3 of user core. Apr 14 00:50:17.084263 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 00:50:17.139999 sshd[1587]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:17.148695 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:59818.service: Deactivated successfully. Apr 14 00:50:17.150683 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 00:50:17.152420 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Apr 14 00:50:17.166653 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:59834.service - OpenSSH per-connection server daemon (10.0.0.1:59834). Apr 14 00:50:17.168250 systemd-logind[1449]: Removed session 3. Apr 14 00:50:17.199236 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 59834 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:17.201918 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:17.207654 systemd-logind[1449]: New session 4 of user core. Apr 14 00:50:17.222334 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 00:50:17.286349 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:17.303982 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:59834.service: Deactivated successfully. Apr 14 00:50:17.305751 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 00:50:17.307176 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Apr 14 00:50:17.319384 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:59848.service - OpenSSH per-connection server daemon (10.0.0.1:59848). Apr 14 00:50:17.320705 systemd-logind[1449]: Removed session 4. 
Apr 14 00:50:17.356835 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 59848 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:17.358190 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:17.372633 systemd-logind[1449]: New session 5 of user core. Apr 14 00:50:17.387119 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 00:50:17.462428 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 00:50:17.463339 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:50:17.492464 sudo[1605]: pam_unix(sudo:session): session closed for user root Apr 14 00:50:17.497654 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:17.509620 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:59848.service: Deactivated successfully. Apr 14 00:50:17.511384 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 00:50:17.512823 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Apr 14 00:50:17.520998 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:59856.service - OpenSSH per-connection server daemon (10.0.0.1:59856). Apr 14 00:50:17.522135 systemd-logind[1449]: Removed session 5. Apr 14 00:50:17.560930 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 59856 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:17.564375 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:17.571189 systemd-logind[1449]: New session 6 of user core. Apr 14 00:50:17.585770 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 14 00:50:17.655791 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 00:50:17.656415 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:50:17.665477 sudo[1614]: pam_unix(sudo:session): session closed for user root Apr 14 00:50:17.673476 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 00:50:17.673833 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:50:17.697263 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 00:50:17.702105 auditctl[1617]: No rules Apr 14 00:50:17.703829 systemd[1]: audit-rules.service: Deactivated successfully. Apr 14 00:50:17.705191 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 00:50:17.726463 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 00:50:17.815339 augenrules[1635]: No rules Apr 14 00:50:17.818278 sudo[1613]: pam_unix(sudo:session): session closed for user root Apr 14 00:50:17.817128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 00:50:17.820894 sshd[1610]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:17.827949 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:59856.service: Deactivated successfully. Apr 14 00:50:17.829755 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 00:50:17.831442 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Apr 14 00:50:17.841895 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:59866.service - OpenSSH per-connection server daemon (10.0.0.1:59866). Apr 14 00:50:17.843890 systemd-logind[1449]: Removed session 6. 
Apr 14 00:50:17.886675 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 59866 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:50:17.888835 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:50:17.895066 systemd-logind[1449]: New session 7 of user core. Apr 14 00:50:17.905011 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 00:50:17.966888 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 00:50:17.968025 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:50:18.305212 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 00:50:18.305363 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 00:50:18.724209 dockerd[1664]: time="2026-04-14T00:50:18.723435153Z" level=info msg="Starting up" Apr 14 00:50:19.021924 dockerd[1664]: time="2026-04-14T00:50:19.021642449Z" level=info msg="Loading containers: start." Apr 14 00:50:19.191594 kernel: Initializing XFRM netlink socket Apr 14 00:50:19.319228 systemd-networkd[1401]: docker0: Link UP Apr 14 00:50:19.360604 dockerd[1664]: time="2026-04-14T00:50:19.360460353Z" level=info msg="Loading containers: done." Apr 14 00:50:19.381643 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1643130216-merged.mount: Deactivated successfully. 
Apr 14 00:50:19.382422 dockerd[1664]: time="2026-04-14T00:50:19.382018355Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 00:50:19.382693 dockerd[1664]: time="2026-04-14T00:50:19.382672744Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 00:50:19.382858 dockerd[1664]: time="2026-04-14T00:50:19.382798161Z" level=info msg="Daemon has completed initialization" Apr 14 00:50:19.440033 dockerd[1664]: time="2026-04-14T00:50:19.439874483Z" level=info msg="API listen on /run/docker.sock" Apr 14 00:50:19.440353 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 00:50:19.945963 containerd[1472]: time="2026-04-14T00:50:19.945839145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 14 00:50:20.476107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942573058.mount: Deactivated successfully. 
Apr 14 00:50:21.478989 containerd[1472]: time="2026-04-14T00:50:21.478871990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:21.480891 containerd[1472]: time="2026-04-14T00:50:21.480807078Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947180" Apr 14 00:50:21.483226 containerd[1472]: time="2026-04-14T00:50:21.483020543Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:21.487985 containerd[1472]: time="2026-04-14T00:50:21.487917170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:21.489707 containerd[1472]: time="2026-04-14T00:50:21.489656658Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.543788767s" Apr 14 00:50:21.489766 containerd[1472]: time="2026-04-14T00:50:21.489709659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\"" Apr 14 00:50:21.490676 containerd[1472]: time="2026-04-14T00:50:21.490481614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 14 00:50:22.598271 containerd[1472]: time="2026-04-14T00:50:22.598167570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:22.598996 containerd[1472]: time="2026-04-14T00:50:22.598956311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165744" Apr 14 00:50:22.600960 containerd[1472]: time="2026-04-14T00:50:22.600891368Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:22.604556 containerd[1472]: time="2026-04-14T00:50:22.604434619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:22.605597 containerd[1472]: time="2026-04-14T00:50:22.605546900Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 1.114649883s" Apr 14 00:50:22.605597 containerd[1472]: time="2026-04-14T00:50:22.605592484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\"" Apr 14 00:50:22.607222 containerd[1472]: time="2026-04-14T00:50:22.606445351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 14 00:50:23.434405 containerd[1472]: time="2026-04-14T00:50:23.434295195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:23.435618 containerd[1472]: time="2026-04-14T00:50:23.435471615Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729779" Apr 14 00:50:23.439323 containerd[1472]: time="2026-04-14T00:50:23.439000237Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:23.459988 containerd[1472]: time="2026-04-14T00:50:23.459251084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:23.472291 containerd[1472]: time="2026-04-14T00:50:23.471015153Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 864.383041ms" Apr 14 00:50:23.472291 containerd[1472]: time="2026-04-14T00:50:23.471162284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\"" Apr 14 00:50:23.473322 containerd[1472]: time="2026-04-14T00:50:23.473253850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 14 00:50:24.639911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403216926.mount: Deactivated successfully. 
Apr 14 00:50:25.118466 containerd[1472]: time="2026-04-14T00:50:25.118031539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:25.120031 containerd[1472]: time="2026-04-14T00:50:25.119855565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861668" Apr 14 00:50:25.121355 containerd[1472]: time="2026-04-14T00:50:25.121254581Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:25.125458 containerd[1472]: time="2026-04-14T00:50:25.125350386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:25.128180 containerd[1472]: time="2026-04-14T00:50:25.126805338Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.653447282s" Apr 14 00:50:25.128180 containerd[1472]: time="2026-04-14T00:50:25.126841911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 14 00:50:25.131736 containerd[1472]: time="2026-04-14T00:50:25.131658733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 14 00:50:25.609190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890408348.mount: Deactivated successfully. 
Apr 14 00:50:26.732362 containerd[1472]: time="2026-04-14T00:50:26.730956926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:26.734647 containerd[1472]: time="2026-04-14T00:50:26.734358576Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 14 00:50:26.737262 containerd[1472]: time="2026-04-14T00:50:26.737055817Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:26.747858 containerd[1472]: time="2026-04-14T00:50:26.747749920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:26.750677 containerd[1472]: time="2026-04-14T00:50:26.750599948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.618861079s" Apr 14 00:50:26.750677 containerd[1472]: time="2026-04-14T00:50:26.750657098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 14 00:50:26.751945 containerd[1472]: time="2026-04-14T00:50:26.751790186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 14 00:50:27.302270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 00:50:27.317248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 14 00:50:27.332783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235458855.mount: Deactivated successfully. Apr 14 00:50:27.348959 containerd[1472]: time="2026-04-14T00:50:27.348751602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:27.352448 containerd[1472]: time="2026-04-14T00:50:27.350949479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 14 00:50:27.353871 containerd[1472]: time="2026-04-14T00:50:27.353790232Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:27.359676 containerd[1472]: time="2026-04-14T00:50:27.359366900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:50:27.360814 containerd[1472]: time="2026-04-14T00:50:27.360734498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 608.799952ms" Apr 14 00:50:27.360814 containerd[1472]: time="2026-04-14T00:50:27.360767936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 14 00:50:27.361612 containerd[1472]: time="2026-04-14T00:50:27.361470097Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 14 00:50:27.726049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:50:27.761990 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 00:50:28.081712 kubelet[1952]: E0414 00:50:28.076175 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 00:50:28.084053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 00:50:28.084234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 00:50:28.131211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581095626.mount: Deactivated successfully.
Apr 14 00:50:31.539684 containerd[1472]: time="2026-04-14T00:50:31.539291691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:50:31.543909 containerd[1472]: time="2026-04-14T00:50:31.543823528Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22873707"
Apr 14 00:50:31.550733 containerd[1472]: time="2026-04-14T00:50:31.550389642Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:50:31.612973 containerd[1472]: time="2026-04-14T00:50:31.612858971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:50:31.621836 containerd[1472]: time="2026-04-14T00:50:31.621665666Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 4.259711245s"
Apr 14 00:50:31.621836 containerd[1472]: time="2026-04-14T00:50:31.621800308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 14 00:50:36.128718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:50:36.146927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:50:36.183996 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)...
Apr 14 00:50:36.184176 systemd[1]: Reloading...
Apr 14 00:50:36.275990 zram_generator::config[2087]: No configuration found.
Apr 14 00:50:36.494977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:50:36.639960 systemd[1]: Reloading finished in 455 ms.
Apr 14 00:50:36.717232 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 14 00:50:36.717314 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 14 00:50:36.717868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:50:36.727446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:50:36.920830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:50:36.941100 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:50:37.018473 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 00:50:37.018473 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:50:37.019065 kubelet[2139]: I0414 00:50:37.018690 2139 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 00:50:37.445706 kubelet[2139]: I0414 00:50:37.440420 2139 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 14 00:50:37.445706 kubelet[2139]: I0414 00:50:37.440460 2139 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:50:37.445706 kubelet[2139]: I0414 00:50:37.440485 2139 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 00:50:37.445706 kubelet[2139]: I0414 00:50:37.440493 2139 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:50:37.445706 kubelet[2139]: I0414 00:50:37.443291 2139 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 00:50:37.548606 kubelet[2139]: E0414 00:50:37.548377 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:50:37.552850 kubelet[2139]: I0414 00:50:37.552724 2139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:50:37.561255 kubelet[2139]: E0414 00:50:37.561053 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:50:37.561255 kubelet[2139]: I0414 00:50:37.561212 2139 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:50:37.565350 kubelet[2139]: I0414 00:50:37.565183 2139 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 00:50:37.566593 kubelet[2139]: I0414 00:50:37.566290 2139 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:50:37.566714 kubelet[2139]: I0414 00:50:37.566436 2139 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 00:50:37.566714 kubelet[2139]: I0414 00:50:37.566635 2139 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 00:50:37.566714 kubelet[2139]: I0414 00:50:37.566645 2139 container_manager_linux.go:306] "Creating device plugin manager"
Apr 14 00:50:37.566940 kubelet[2139]: I0414 00:50:37.566764 2139 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 00:50:37.569887 kubelet[2139]: I0414 00:50:37.569762 2139 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:50:37.570177 kubelet[2139]: I0414 00:50:37.570079 2139 kubelet.go:475] "Attempting to sync node with API server"
Apr 14 00:50:37.570177 kubelet[2139]: I0414 00:50:37.570118 2139 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:50:37.570177 kubelet[2139]: I0414 00:50:37.570178 2139 kubelet.go:387] "Adding apiserver pod source"
Apr 14 00:50:37.570274 kubelet[2139]: I0414 00:50:37.570193 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:50:37.571474 kubelet[2139]: E0414 00:50:37.571339 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:50:37.571474 kubelet[2139]: E0414 00:50:37.571462 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:50:37.575278 kubelet[2139]: I0414 00:50:37.573438 2139 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:50:37.575278 kubelet[2139]: I0414 00:50:37.574470 2139 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:50:37.575278 kubelet[2139]: I0414 00:50:37.574607 2139 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 00:50:37.575278 kubelet[2139]: W0414 00:50:37.574692 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 00:50:37.580654 kubelet[2139]: I0414 00:50:37.580616 2139 server.go:1262] "Started kubelet"
Apr 14 00:50:37.582068 kubelet[2139]: I0414 00:50:37.582026 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 00:50:37.582879 kubelet[2139]: I0414 00:50:37.582820 2139 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:50:37.584893 kubelet[2139]: I0414 00:50:37.584829 2139 server.go:310] "Adding debug handlers to kubelet server"
Apr 14 00:50:37.587750 kubelet[2139]: I0414 00:50:37.587456 2139 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 14 00:50:37.588616 kubelet[2139]: I0414 00:50:37.588217 2139 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 00:50:37.588987 kubelet[2139]: I0414 00:50:37.588808 2139 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 00:50:37.588987 kubelet[2139]: E0414 00:50:37.587243 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a612dba2aafa82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:50:37.580491394 +0000 UTC m=+0.629752076,LastTimestamp:2026-04-14 00:50:37.580491394 +0000 UTC m=+0.629752076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:50:37.589627 kubelet[2139]: I0414 00:50:37.589452 2139 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:50:37.589627 kubelet[2139]: E0414 00:50:37.589490 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:50:37.589627 kubelet[2139]: I0414 00:50:37.589562 2139 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 00:50:37.590008 kubelet[2139]: I0414 00:50:37.589939 2139 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:50:37.590008 kubelet[2139]: E0414 00:50:37.589965 2139 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:50:37.590085 kubelet[2139]: E0414 00:50:37.590025 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms"
Apr 14 00:50:37.590085 kubelet[2139]: I0414 00:50:37.590037 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:50:37.595616 kubelet[2139]: I0414 00:50:37.593855 2139 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:50:37.595616 kubelet[2139]: I0414 00:50:37.593951 2139 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:50:37.595616 kubelet[2139]: E0414 00:50:37.595245 2139 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:50:37.595784 kubelet[2139]: I0414 00:50:37.595775 2139 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:50:37.617833 kubelet[2139]: I0414 00:50:37.617713 2139 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 00:50:37.617833 kubelet[2139]: I0414 00:50:37.617750 2139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 00:50:37.617833 kubelet[2139]: I0414 00:50:37.617764 2139 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:50:37.621217 kubelet[2139]: I0414 00:50:37.621184 2139 policy_none.go:49] "None policy: Start"
Apr 14 00:50:37.621217 kubelet[2139]: I0414 00:50:37.621203 2139 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 00:50:37.621217 kubelet[2139]: I0414 00:50:37.621213 2139 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 00:50:37.623393 kubelet[2139]: I0414 00:50:37.623371 2139 policy_none.go:47] "Start"
Apr 14 00:50:37.628023 kubelet[2139]: I0414 00:50:37.627287 2139 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:50:37.629367 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 14 00:50:37.630857 kubelet[2139]: I0414 00:50:37.630070 2139 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:50:37.630857 kubelet[2139]: I0414 00:50:37.630091 2139 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 14 00:50:37.631831 kubelet[2139]: I0414 00:50:37.631767 2139 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 14 00:50:37.631943 kubelet[2139]: E0414 00:50:37.631846 2139 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:50:37.633581 kubelet[2139]: E0414 00:50:37.632697 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:50:37.652998 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 14 00:50:37.659922 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 14 00:50:37.676673 kubelet[2139]: E0414 00:50:37.676371 2139 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:50:37.676673 kubelet[2139]: I0414 00:50:37.676687 2139 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 00:50:37.676845 kubelet[2139]: I0414 00:50:37.676703 2139 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:50:37.676963 kubelet[2139]: I0414 00:50:37.676953 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 00:50:37.679614 kubelet[2139]: E0414 00:50:37.679443 2139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:50:37.679614 kubelet[2139]: E0414 00:50:37.679486 2139 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:50:37.748908 systemd[1]: Created slice kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice - libcontainer container kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice.
Apr 14 00:50:37.767055 kubelet[2139]: E0414 00:50:37.766950 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:50:37.773914 systemd[1]: Created slice kubepods-burstable-podb23e8122d59dc230b84241dd9f0faca4.slice - libcontainer container kubepods-burstable-podb23e8122d59dc230b84241dd9f0faca4.slice.
Apr 14 00:50:37.779379 kubelet[2139]: E0414 00:50:37.778979 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:50:37.780802 kubelet[2139]: I0414 00:50:37.780741 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:50:37.781339 kubelet[2139]: E0414 00:50:37.781291 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 14 00:50:37.784751 systemd[1]: Created slice kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice - libcontainer container kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice.
Apr 14 00:50:37.787939 kubelet[2139]: E0414 00:50:37.787845 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:50:37.790608 kubelet[2139]: I0414 00:50:37.790325 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:50:37.790608 kubelet[2139]: I0414 00:50:37.790390 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:50:37.790608 kubelet[2139]: I0414 00:50:37.790414 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:50:37.790608 kubelet[2139]: I0414 00:50:37.790433 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:50:37.790608 kubelet[2139]: I0414 00:50:37.790453 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:50:37.790897 kubelet[2139]: I0414 00:50:37.790470 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:50:37.790897 kubelet[2139]: I0414 00:50:37.790488 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:50:37.790897 kubelet[2139]: I0414 00:50:37.790580 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:50:37.790897 kubelet[2139]: I0414 00:50:37.790598 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:50:37.791079 kubelet[2139]: E0414 00:50:37.790914 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms"
Apr 14 00:50:37.985604 kubelet[2139]: I0414 00:50:37.985207 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:50:37.985749 kubelet[2139]: E0414 00:50:37.985660 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 14 00:50:38.072193 kubelet[2139]: E0414 00:50:38.071738 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.073606 containerd[1472]: time="2026-04-14T00:50:38.072620449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,}"
Apr 14 00:50:38.082178 kubelet[2139]: E0414 00:50:38.081969 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.082995 containerd[1472]: time="2026-04-14T00:50:38.082936688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b23e8122d59dc230b84241dd9f0faca4,Namespace:kube-system,Attempt:0,}"
Apr 14 00:50:38.092541 kubelet[2139]: E0414 00:50:38.092379 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.093095 containerd[1472]: time="2026-04-14T00:50:38.093034693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,}"
Apr 14 00:50:38.193440 kubelet[2139]: E0414 00:50:38.193315 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms"
Apr 14 00:50:38.390423 kubelet[2139]: I0414 00:50:38.390012 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:50:38.391154 kubelet[2139]: E0414 00:50:38.391097 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 14 00:50:38.580829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218800956.mount: Deactivated successfully.
Apr 14 00:50:38.590489 containerd[1472]: time="2026-04-14T00:50:38.590351845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:50:38.596111 containerd[1472]: time="2026-04-14T00:50:38.596009345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 00:50:38.599112 containerd[1472]: time="2026-04-14T00:50:38.599012243Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:50:38.601039 containerd[1472]: time="2026-04-14T00:50:38.600983739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:50:38.605737 containerd[1472]: time="2026-04-14T00:50:38.605294542Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:50:38.610947 containerd[1472]: time="2026-04-14T00:50:38.610829976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:50:38.611847 containerd[1472]: time="2026-04-14T00:50:38.611769402Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:50:38.618471 containerd[1472]: time="2026-04-14T00:50:38.618392323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:50:38.622289 containerd[1472]: time="2026-04-14T00:50:38.621954538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.852974ms"
Apr 14 00:50:38.624474 containerd[1472]: time="2026-04-14T00:50:38.624280355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.591697ms"
Apr 14 00:50:38.626837 containerd[1472]: time="2026-04-14T00:50:38.626758031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.524481ms"
Apr 14 00:50:38.741972 kubelet[2139]: E0414 00:50:38.741617 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:50:38.795220 kubelet[2139]: E0414 00:50:38.795075 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:50:38.813819 containerd[1472]: time="2026-04-14T00:50:38.813596265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:50:38.813819 containerd[1472]: time="2026-04-14T00:50:38.813634511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:50:38.813819 containerd[1472]: time="2026-04-14T00:50:38.813652462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.813819 containerd[1472]: time="2026-04-14T00:50:38.813712153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.815043 containerd[1472]: time="2026-04-14T00:50:38.814423832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:50:38.815043 containerd[1472]: time="2026-04-14T00:50:38.814457909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:50:38.815043 containerd[1472]: time="2026-04-14T00:50:38.814480175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.815043 containerd[1472]: time="2026-04-14T00:50:38.814803579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.817865 containerd[1472]: time="2026-04-14T00:50:38.817719255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:50:38.817865 containerd[1472]: time="2026-04-14T00:50:38.817811079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:50:38.817865 containerd[1472]: time="2026-04-14T00:50:38.817824811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.818632 containerd[1472]: time="2026-04-14T00:50:38.818447530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:50:38.862916 systemd[1]: Started cri-containerd-b7dcecd07dbdad5d2105346c2456bfcaab7d34c97da273aa0caa488f5c3ddffa.scope - libcontainer container b7dcecd07dbdad5d2105346c2456bfcaab7d34c97da273aa0caa488f5c3ddffa.
Apr 14 00:50:38.868254 systemd[1]: Started cri-containerd-3e5260c3548d3d68962cadf36ece39bd9047873b74a948b8b3fa5bcf7a0bec3b.scope - libcontainer container 3e5260c3548d3d68962cadf36ece39bd9047873b74a948b8b3fa5bcf7a0bec3b.
Apr 14 00:50:38.870179 systemd[1]: Started cri-containerd-e5a7093dc9e51e7748dac0a4cc88eeec4a5c6f4db846ef7809615a9df8f1b984.scope - libcontainer container e5a7093dc9e51e7748dac0a4cc88eeec4a5c6f4db846ef7809615a9df8f1b984.
Apr 14 00:50:38.947248 containerd[1472]: time="2026-04-14T00:50:38.947011481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7dcecd07dbdad5d2105346c2456bfcaab7d34c97da273aa0caa488f5c3ddffa\""
Apr 14 00:50:38.955662 kubelet[2139]: E0414 00:50:38.952827 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.963103 containerd[1472]: time="2026-04-14T00:50:38.962892887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e5260c3548d3d68962cadf36ece39bd9047873b74a948b8b3fa5bcf7a0bec3b\""
Apr 14 00:50:38.963103 containerd[1472]: time="2026-04-14T00:50:38.964435930Z" level=info msg="CreateContainer within sandbox \"b7dcecd07dbdad5d2105346c2456bfcaab7d34c97da273aa0caa488f5c3ddffa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 14 00:50:38.965835 kubelet[2139]: E0414 00:50:38.965790 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.979449 containerd[1472]: time="2026-04-14T00:50:38.979203436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b23e8122d59dc230b84241dd9f0faca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5a7093dc9e51e7748dac0a4cc88eeec4a5c6f4db846ef7809615a9df8f1b984\""
Apr 14 00:50:38.983820 kubelet[2139]: E0414 00:50:38.983222 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:38.990687 containerd[1472]: time="2026-04-14T00:50:38.987831408Z" level=info msg="CreateContainer within sandbox \"3e5260c3548d3d68962cadf36ece39bd9047873b74a948b8b3fa5bcf7a0bec3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 00:50:38.998478 kubelet[2139]: E0414 00:50:38.996077 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s"
Apr 14 00:50:38.998847 containerd[1472]: time="2026-04-14T00:50:38.998743375Z" level=info msg="CreateContainer within sandbox \"e5a7093dc9e51e7748dac0a4cc88eeec4a5c6f4db846ef7809615a9df8f1b984\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 00:50:39.024450 kubelet[2139]: E0414 00:50:39.024083 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:50:39.045942 containerd[1472]: time="2026-04-14T00:50:39.045818855Z" level=info msg="CreateContainer within sandbox \"b7dcecd07dbdad5d2105346c2456bfcaab7d34c97da273aa0caa488f5c3ddffa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91d46eff7c4f53de349b77150e4ffd8aed0ec934b2acc0d312c85c165b132387\""
Apr 14 00:50:39.048646 kubelet[2139]: E0414 00:50:39.048566 2139 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:50:39.048798 containerd[1472]: time="2026-04-14T00:50:39.048694142Z" level=info msg="StartContainer for \"91d46eff7c4f53de349b77150e4ffd8aed0ec934b2acc0d312c85c165b132387\""
Apr 14 00:50:39.068073 containerd[1472]: time="2026-04-14T00:50:39.067969213Z" level=info msg="CreateContainer within sandbox \"3e5260c3548d3d68962cadf36ece39bd9047873b74a948b8b3fa5bcf7a0bec3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa4a710a0f60f12958e3e7e43a27df9493732ebeaff9048689011393b3611787\""
Apr 14 00:50:39.068975 containerd[1472]: time="2026-04-14T00:50:39.068950196Z" level=info msg="StartContainer for \"fa4a710a0f60f12958e3e7e43a27df9493732ebeaff9048689011393b3611787\""
Apr 14 00:50:39.074337 containerd[1472]: time="2026-04-14T00:50:39.072088714Z" level=info msg="CreateContainer within sandbox \"e5a7093dc9e51e7748dac0a4cc88eeec4a5c6f4db846ef7809615a9df8f1b984\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6550399ab4b140918bfc6dd4c3d63a63d3e44df9db46e003abce72fd9cdf17e9\""
Apr 14 00:50:39.076800 containerd[1472]: time="2026-04-14T00:50:39.075263326Z" level=info msg="StartContainer for \"6550399ab4b140918bfc6dd4c3d63a63d3e44df9db46e003abce72fd9cdf17e9\""
Apr 14 00:50:39.130246 systemd[1]: Started cri-containerd-91d46eff7c4f53de349b77150e4ffd8aed0ec934b2acc0d312c85c165b132387.scope - libcontainer container 91d46eff7c4f53de349b77150e4ffd8aed0ec934b2acc0d312c85c165b132387.
Apr 14 00:50:39.154098 systemd[1]: Started cri-containerd-6550399ab4b140918bfc6dd4c3d63a63d3e44df9db46e003abce72fd9cdf17e9.scope - libcontainer container 6550399ab4b140918bfc6dd4c3d63a63d3e44df9db46e003abce72fd9cdf17e9.
Apr 14 00:50:39.179418 systemd[1]: Started cri-containerd-fa4a710a0f60f12958e3e7e43a27df9493732ebeaff9048689011393b3611787.scope - libcontainer container fa4a710a0f60f12958e3e7e43a27df9493732ebeaff9048689011393b3611787.
Apr 14 00:50:39.200709 kubelet[2139]: I0414 00:50:39.200248 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:50:39.200709 kubelet[2139]: E0414 00:50:39.200676 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 14 00:50:39.307308 containerd[1472]: time="2026-04-14T00:50:39.305028250Z" level=info msg="StartContainer for \"91d46eff7c4f53de349b77150e4ffd8aed0ec934b2acc0d312c85c165b132387\" returns successfully"
Apr 14 00:50:39.327406 containerd[1472]: time="2026-04-14T00:50:39.327066365Z" level=info msg="StartContainer for \"6550399ab4b140918bfc6dd4c3d63a63d3e44df9db46e003abce72fd9cdf17e9\" returns successfully"
Apr 14 00:50:39.358196 containerd[1472]: time="2026-04-14T00:50:39.357735776Z" level=info msg="StartContainer for \"fa4a710a0f60f12958e3e7e43a27df9493732ebeaff9048689011393b3611787\" returns successfully"
Apr 14 00:50:39.652831 kubelet[2139]: E0414 00:50:39.652652 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:50:39.652831 kubelet[2139]: E0414 00:50:39.652784 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:50:39.659447 kubelet[2139]: E0414 00:50:39.659374 2139 kubelet.go:3216] "No need to
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:50:39.660449 kubelet[2139]: E0414 00:50:39.660405 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:39.663745 kubelet[2139]: E0414 00:50:39.663696 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:50:39.663831 kubelet[2139]: E0414 00:50:39.663809 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:40.667067 kubelet[2139]: E0414 00:50:40.666714 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:50:40.667067 kubelet[2139]: E0414 00:50:40.666819 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:40.667067 kubelet[2139]: E0414 00:50:40.666981 2139 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:50:40.667067 kubelet[2139]: E0414 00:50:40.667030 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:40.807307 kubelet[2139]: I0414 00:50:40.807118 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:50:40.949356 kubelet[2139]: E0414 00:50:40.949301 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 00:50:41.029492 kubelet[2139]: I0414 00:50:41.029308 2139 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:50:41.029492 kubelet[2139]: E0414 00:50:41.029359 2139 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 00:50:41.090804 kubelet[2139]: I0414 00:50:41.090592 2139 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:41.155654 kubelet[2139]: E0414 00:50:41.154669 2139 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:41.155654 kubelet[2139]: I0414 00:50:41.154737 2139 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:41.158629 kubelet[2139]: E0414 00:50:41.158462 2139 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:41.158629 kubelet[2139]: I0414 00:50:41.158622 2139 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:41.162218 kubelet[2139]: E0414 00:50:41.162123 2139 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:41.575021 kubelet[2139]: I0414 00:50:41.574876 2139 apiserver.go:52] "Watching apiserver" Apr 14 00:50:41.590164 kubelet[2139]: I0414 00:50:41.590018 2139 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 
00:50:41.667821 kubelet[2139]: I0414 00:50:41.667730 2139 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:41.670751 kubelet[2139]: E0414 00:50:41.670685 2139 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:41.671104 kubelet[2139]: E0414 00:50:41.670941 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:43.895907 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)... Apr 14 00:50:43.895944 systemd[1]: Reloading... Apr 14 00:50:43.963624 zram_generator::config[2464]: No configuration found. Apr 14 00:50:44.074844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:50:44.148028 systemd[1]: Reloading finished in 251 ms. Apr 14 00:50:44.189836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:50:44.202653 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 00:50:44.203967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:50:44.204025 systemd[1]: kubelet.service: Consumed 1.403s CPU time, 126.8M memory peak, 0B memory swap peak. Apr 14 00:50:44.216142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:50:44.418778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:50:44.418944 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:50:44.557712 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:50:44.557712 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:50:44.558289 kubelet[2511]: I0414 00:50:44.557787 2511 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:50:44.569461 kubelet[2511]: I0414 00:50:44.569269 2511 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 14 00:50:44.569461 kubelet[2511]: I0414 00:50:44.569329 2511 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:50:44.569461 kubelet[2511]: I0414 00:50:44.569363 2511 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 14 00:50:44.569461 kubelet[2511]: I0414 00:50:44.569371 2511 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 14 00:50:44.569792 kubelet[2511]: I0414 00:50:44.569764 2511 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:50:44.573969 kubelet[2511]: I0414 00:50:44.573789 2511 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 00:50:44.577306 kubelet[2511]: I0414 00:50:44.577233 2511 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:50:44.582540 kubelet[2511]: E0414 00:50:44.582338 2511 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:50:44.582540 kubelet[2511]: I0414 00:50:44.582423 2511 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 14 00:50:44.589637 kubelet[2511]: I0414 00:50:44.589341 2511 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 14 00:50:44.590278 kubelet[2511]: I0414 00:50:44.590142 2511 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:50:44.590454 kubelet[2511]: I0414 00:50:44.590231 2511 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 00:50:44.590454 kubelet[2511]: I0414 00:50:44.590435 2511 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 00:50:44.590454 
kubelet[2511]: I0414 00:50:44.590445 2511 container_manager_linux.go:306] "Creating device plugin manager" Apr 14 00:50:44.590720 kubelet[2511]: I0414 00:50:44.590474 2511 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 14 00:50:44.590964 kubelet[2511]: I0414 00:50:44.590881 2511 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:50:44.591260 kubelet[2511]: I0414 00:50:44.591181 2511 kubelet.go:475] "Attempting to sync node with API server" Apr 14 00:50:44.591260 kubelet[2511]: I0414 00:50:44.591218 2511 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:50:44.591260 kubelet[2511]: I0414 00:50:44.591247 2511 kubelet.go:387] "Adding apiserver pod source" Apr 14 00:50:44.591355 kubelet[2511]: I0414 00:50:44.591268 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:50:44.595778 kubelet[2511]: I0414 00:50:44.595650 2511 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:50:44.598452 kubelet[2511]: I0414 00:50:44.596482 2511 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:50:44.598452 kubelet[2511]: I0414 00:50:44.596588 2511 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 14 00:50:44.615191 kubelet[2511]: I0414 00:50:44.615094 2511 server.go:1262] "Started kubelet" Apr 14 00:50:44.616246 kubelet[2511]: I0414 00:50:44.615982 2511 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:50:44.616246 kubelet[2511]: I0414 00:50:44.616049 2511 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 14 00:50:44.616390 kubelet[2511]: I0414 00:50:44.616361 2511 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 00:50:44.617911 kubelet[2511]: I0414 00:50:44.617771 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 00:50:44.623113 kubelet[2511]: I0414 00:50:44.622859 2511 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:50:44.623708 kubelet[2511]: I0414 00:50:44.623629 2511 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 00:50:44.626120 kubelet[2511]: I0414 00:50:44.625778 2511 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 14 00:50:44.627894 kubelet[2511]: I0414 00:50:44.627832 2511 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 14 00:50:44.627999 kubelet[2511]: I0414 00:50:44.627989 2511 reconciler.go:29] "Reconciler: start to sync state" Apr 14 00:50:44.629950 kubelet[2511]: I0414 00:50:44.629862 2511 factory.go:223] Registration of the systemd container factory successfully Apr 14 00:50:44.630050 kubelet[2511]: I0414 00:50:44.630008 2511 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 00:50:44.633109 kubelet[2511]: I0414 00:50:44.632957 2511 server.go:310] "Adding debug handlers to kubelet server" Apr 14 00:50:44.636687 kubelet[2511]: I0414 00:50:44.636290 2511 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 14 00:50:44.641926 kubelet[2511]: I0414 00:50:44.641848 2511 factory.go:223] Registration of the containerd container factory successfully Apr 14 00:50:44.643765 kubelet[2511]: E0414 00:50:44.643710 2511 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 00:50:44.651298 kubelet[2511]: I0414 00:50:44.651242 2511 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 14 00:50:44.651767 kubelet[2511]: I0414 00:50:44.651430 2511 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 14 00:50:44.651767 kubelet[2511]: I0414 00:50:44.651459 2511 kubelet.go:2428] "Starting kubelet main sync loop" Apr 14 00:50:44.651767 kubelet[2511]: E0414 00:50:44.651561 2511 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 00:50:44.712805 kubelet[2511]: I0414 00:50:44.712654 2511 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 00:50:44.712805 kubelet[2511]: I0414 00:50:44.712702 2511 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 00:50:44.712805 kubelet[2511]: I0414 00:50:44.712728 2511 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712843 2511 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712851 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712864 2511 policy_none.go:49] "None policy: Start" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712872 2511 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712879 2511 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712943 2511 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 14 00:50:44.713011 kubelet[2511]: I0414 00:50:44.712948 2511 policy_none.go:47] "Start" Apr 14 00:50:44.722654 
kubelet[2511]: E0414 00:50:44.722600 2511 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 00:50:44.723483 kubelet[2511]: I0414 00:50:44.723094 2511 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 00:50:44.723483 kubelet[2511]: I0414 00:50:44.723109 2511 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 00:50:44.723483 kubelet[2511]: I0414 00:50:44.723407 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 00:50:44.727569 kubelet[2511]: E0414 00:50:44.725294 2511 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 00:50:44.753051 kubelet[2511]: I0414 00:50:44.752884 2511 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:44.753051 kubelet[2511]: I0414 00:50:44.753011 2511 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:44.753468 kubelet[2511]: I0414 00:50:44.753432 2511 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:44.834139 kubelet[2511]: I0414 00:50:44.834050 2511 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:50:44.845930 kubelet[2511]: I0414 00:50:44.845696 2511 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 00:50:44.846333 kubelet[2511]: I0414 00:50:44.846144 2511 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:50:44.916949 sudo[2554]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 14 00:50:44.917265 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 14 
00:50:44.930003 kubelet[2511]: I0414 00:50:44.929870 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:44.930235 kubelet[2511]: I0414 00:50:44.930031 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:44.930235 kubelet[2511]: I0414 00:50:44.930122 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:44.930379 kubelet[2511]: I0414 00:50:44.930143 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:44.930716 kubelet[2511]: I0414 00:50:44.930392 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost" 
Apr 14 00:50:44.930716 kubelet[2511]: I0414 00:50:44.930419 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23e8122d59dc230b84241dd9f0faca4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b23e8122d59dc230b84241dd9f0faca4\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:44.930716 kubelet[2511]: I0414 00:50:44.930717 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:44.930986 kubelet[2511]: I0414 00:50:44.930736 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:44.930986 kubelet[2511]: I0414 00:50:44.930754 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:50:45.064070 kubelet[2511]: E0414 00:50:45.062777 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.064070 kubelet[2511]: E0414 00:50:45.062998 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.064070 kubelet[2511]: E0414 00:50:45.063037 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.593309 kubelet[2511]: I0414 00:50:45.592611 2511 apiserver.go:52] "Watching apiserver" Apr 14 00:50:45.628713 kubelet[2511]: I0414 00:50:45.628617 2511 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 00:50:45.673377 kubelet[2511]: E0414 00:50:45.673286 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.674988 kubelet[2511]: I0414 00:50:45.674401 2511 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:45.674988 kubelet[2511]: I0414 00:50:45.674699 2511 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:45.683831 kubelet[2511]: E0414 00:50:45.683718 2511 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 14 00:50:45.683965 kubelet[2511]: E0414 00:50:45.683897 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.685108 kubelet[2511]: E0414 00:50:45.684286 2511 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 00:50:45.685108 kubelet[2511]: E0414 00:50:45.684389 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:45.719010 sudo[2554]: pam_unix(sudo:session): session closed for user root Apr 14 00:50:45.735677 kubelet[2511]: I0414 00:50:45.733117 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.732996457 podStartE2EDuration="1.732996457s" podCreationTimestamp="2026-04-14 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:50:45.732886725 +0000 UTC m=+1.304962480" watchObservedRunningTime="2026-04-14 00:50:45.732996457 +0000 UTC m=+1.305072215" Apr 14 00:50:45.735677 kubelet[2511]: I0414 00:50:45.734346 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.734329663 podStartE2EDuration="1.734329663s" podCreationTimestamp="2026-04-14 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:50:45.716668493 +0000 UTC m=+1.288744248" watchObservedRunningTime="2026-04-14 00:50:45.734329663 +0000 UTC m=+1.306405404" Apr 14 00:50:45.761062 kubelet[2511]: I0414 00:50:45.760620 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.760605508 podStartE2EDuration="1.760605508s" podCreationTimestamp="2026-04-14 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:50:45.758206041 +0000 UTC m=+1.330281800" watchObservedRunningTime="2026-04-14 00:50:45.760605508 +0000 UTC m=+1.332681260" Apr 14 00:50:46.683185 kubelet[2511]: E0414 00:50:46.682898 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:46.683185 kubelet[2511]: E0414 00:50:46.683184 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:47.688319 kubelet[2511]: E0414 00:50:47.687755 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:47.688319 kubelet[2511]: E0414 00:50:47.687779 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:48.288185 sudo[1646]: pam_unix(sudo:session): session closed for user root Apr 14 00:50:48.292340 sshd[1643]: pam_unix(sshd:session): session closed for user core Apr 14 00:50:48.301404 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:59866.service: Deactivated successfully. Apr 14 00:50:48.314995 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 00:50:48.317207 systemd[1]: session-7.scope: Consumed 7.413s CPU time, 159.9M memory peak, 0B memory swap peak. Apr 14 00:50:48.325414 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Apr 14 00:50:48.328200 systemd-logind[1449]: Removed session 7. Apr 14 00:50:49.217290 kubelet[2511]: I0414 00:50:49.216988 2511 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 00:50:49.227042 containerd[1472]: time="2026-04-14T00:50:49.226862199Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 14 00:50:49.227426 kubelet[2511]: I0414 00:50:49.227318 2511 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 00:50:50.360766 systemd[1]: Created slice kubepods-besteffort-pod78fb0779_b93f_4d67_8883_88d1b1a157e7.slice - libcontainer container kubepods-besteffort-pod78fb0779_b93f_4d67_8883_88d1b1a157e7.slice. Apr 14 00:50:50.389764 systemd[1]: Created slice kubepods-burstable-poda9e5bc40_0425_4113_a407_a1133e43b316.slice - libcontainer container kubepods-burstable-poda9e5bc40_0425_4113_a407_a1133e43b316.slice. Apr 14 00:50:50.428482 kubelet[2511]: I0414 00:50:50.428320 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fb0779-b93f-4d67-8883-88d1b1a157e7-xtables-lock\") pod \"kube-proxy-6xdmn\" (UID: \"78fb0779-b93f-4d67-8883-88d1b1a157e7\") " pod="kube-system/kube-proxy-6xdmn" Apr 14 00:50:50.428482 kubelet[2511]: I0414 00:50:50.428364 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78fb0779-b93f-4d67-8883-88d1b1a157e7-kube-proxy\") pod \"kube-proxy-6xdmn\" (UID: \"78fb0779-b93f-4d67-8883-88d1b1a157e7\") " pod="kube-system/kube-proxy-6xdmn" Apr 14 00:50:50.428482 kubelet[2511]: I0414 00:50:50.428375 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fb0779-b93f-4d67-8883-88d1b1a157e7-lib-modules\") pod \"kube-proxy-6xdmn\" (UID: \"78fb0779-b93f-4d67-8883-88d1b1a157e7\") " pod="kube-system/kube-proxy-6xdmn" Apr 14 00:50:50.428482 kubelet[2511]: I0414 00:50:50.428391 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8hb\" (UniqueName: \"kubernetes.io/projected/78fb0779-b93f-4d67-8883-88d1b1a157e7-kube-api-access-rc8hb\") pod 
\"kube-proxy-6xdmn\" (UID: \"78fb0779-b93f-4d67-8883-88d1b1a157e7\") " pod="kube-system/kube-proxy-6xdmn" Apr 14 00:50:50.532337 kubelet[2511]: I0414 00:50:50.529583 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-etc-cni-netd\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.532337 kubelet[2511]: I0414 00:50:50.530657 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-xtables-lock\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.532337 kubelet[2511]: I0414 00:50:50.530716 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-kernel\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.532337 kubelet[2511]: I0414 00:50:50.530743 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-hostproc\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.532337 kubelet[2511]: I0414 00:50:50.530763 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cni-path\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.532337 kubelet[2511]: 
I0414 00:50:50.530780 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-net\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.530797 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-hubble-tls\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.530816 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6d4\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-kube-api-access-zr6d4\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.530836 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-run\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.530872 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-cgroup\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.530901 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9e5bc40-0425-4113-a407-a1133e43b316-clustermesh-secrets\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533385 kubelet[2511]: I0414 00:50:50.531007 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-bpf-maps\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533826 kubelet[2511]: I0414 00:50:50.531026 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-lib-modules\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.533826 kubelet[2511]: I0414 00:50:50.531044 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-config-path\") pod \"cilium-fqdjb\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") " pod="kube-system/cilium-fqdjb" Apr 14 00:50:50.628460 systemd[1]: Created slice kubepods-besteffort-pod8c3eece2_8891_49ba_9804_7e5ff7463046.slice - libcontainer container kubepods-besteffort-pod8c3eece2_8891_49ba_9804_7e5ff7463046.slice. 
Apr 14 00:50:50.673290 kubelet[2511]: E0414 00:50:50.673217 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:50.674125 containerd[1472]: time="2026-04-14T00:50:50.674025136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xdmn,Uid:78fb0779-b93f-4d67-8883-88d1b1a157e7,Namespace:kube-system,Attempt:0,}" Apr 14 00:50:50.700628 kubelet[2511]: E0414 00:50:50.700375 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:50.701970 containerd[1472]: time="2026-04-14T00:50:50.701018319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqdjb,Uid:a9e5bc40-0425-4113-a407-a1133e43b316,Namespace:kube-system,Attempt:0,}" Apr 14 00:50:50.738838 kubelet[2511]: I0414 00:50:50.738379 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzng\" (UniqueName: \"kubernetes.io/projected/8c3eece2-8891-49ba-9804-7e5ff7463046-kube-api-access-7rzng\") pod \"cilium-operator-6f9c7c5859-l7ctr\" (UID: \"8c3eece2-8891-49ba-9804-7e5ff7463046\") " pod="kube-system/cilium-operator-6f9c7c5859-l7ctr" Apr 14 00:50:50.738838 kubelet[2511]: I0414 00:50:50.738460 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3eece2-8891-49ba-9804-7e5ff7463046-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-l7ctr\" (UID: \"8c3eece2-8891-49ba-9804-7e5ff7463046\") " pod="kube-system/cilium-operator-6f9c7c5859-l7ctr" Apr 14 00:50:50.765461 containerd[1472]: time="2026-04-14T00:50:50.764626387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:50:50.765461 containerd[1472]: time="2026-04-14T00:50:50.764727578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:50:50.765461 containerd[1472]: time="2026-04-14T00:50:50.764746733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:50.765461 containerd[1472]: time="2026-04-14T00:50:50.764814480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:50.792278 containerd[1472]: time="2026-04-14T00:50:50.791288623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:50:50.792278 containerd[1472]: time="2026-04-14T00:50:50.791358808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:50:50.792278 containerd[1472]: time="2026-04-14T00:50:50.791418321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:50.792278 containerd[1472]: time="2026-04-14T00:50:50.791568500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:50.803883 systemd[1]: Started cri-containerd-9a21caf632639e56b706cd4256e0282fdad9ea4c125f1d2ca0166c77a5eb0046.scope - libcontainer container 9a21caf632639e56b706cd4256e0282fdad9ea4c125f1d2ca0166c77a5eb0046. Apr 14 00:50:50.829117 systemd[1]: Started cri-containerd-21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4.scope - libcontainer container 21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4. 
Apr 14 00:50:50.863769 containerd[1472]: time="2026-04-14T00:50:50.863493349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xdmn,Uid:78fb0779-b93f-4d67-8883-88d1b1a157e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a21caf632639e56b706cd4256e0282fdad9ea4c125f1d2ca0166c77a5eb0046\"" Apr 14 00:50:50.865815 kubelet[2511]: E0414 00:50:50.865714 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:50.872957 containerd[1472]: time="2026-04-14T00:50:50.872381265Z" level=info msg="CreateContainer within sandbox \"9a21caf632639e56b706cd4256e0282fdad9ea4c125f1d2ca0166c77a5eb0046\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 00:50:50.884629 containerd[1472]: time="2026-04-14T00:50:50.884355571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqdjb,Uid:a9e5bc40-0425-4113-a407-a1133e43b316,Namespace:kube-system,Attempt:0,} returns sandbox id \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\"" Apr 14 00:50:50.889203 kubelet[2511]: E0414 00:50:50.888743 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:50.892465 containerd[1472]: time="2026-04-14T00:50:50.891288096Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 14 00:50:50.901356 containerd[1472]: time="2026-04-14T00:50:50.901223931Z" level=info msg="CreateContainer within sandbox \"9a21caf632639e56b706cd4256e0282fdad9ea4c125f1d2ca0166c77a5eb0046\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef01f2e82221c569c2c980f3a7e193be96af3c7babb7eda0b2dd9d449ea93ac6\"" Apr 14 00:50:50.903380 containerd[1472]: time="2026-04-14T00:50:50.903270668Z" 
level=info msg="StartContainer for \"ef01f2e82221c569c2c980f3a7e193be96af3c7babb7eda0b2dd9d449ea93ac6\"" Apr 14 00:50:51.001563 kubelet[2511]: E0414 00:50:51.001423 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:51.003706 containerd[1472]: time="2026-04-14T00:50:51.003614716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-l7ctr,Uid:8c3eece2-8891-49ba-9804-7e5ff7463046,Namespace:kube-system,Attempt:0,}" Apr 14 00:50:51.022405 kubelet[2511]: E0414 00:50:51.021387 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:51.054950 systemd[1]: Started cri-containerd-ef01f2e82221c569c2c980f3a7e193be96af3c7babb7eda0b2dd9d449ea93ac6.scope - libcontainer container ef01f2e82221c569c2c980f3a7e193be96af3c7babb7eda0b2dd9d449ea93ac6. Apr 14 00:50:51.090273 containerd[1472]: time="2026-04-14T00:50:51.088931177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:50:51.090273 containerd[1472]: time="2026-04-14T00:50:51.088992188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:50:51.090273 containerd[1472]: time="2026-04-14T00:50:51.089223819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:51.097748 containerd[1472]: time="2026-04-14T00:50:51.097667354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:50:51.123954 containerd[1472]: time="2026-04-14T00:50:51.123864214Z" level=info msg="StartContainer for \"ef01f2e82221c569c2c980f3a7e193be96af3c7babb7eda0b2dd9d449ea93ac6\" returns successfully" Apr 14 00:50:51.140934 systemd[1]: Started cri-containerd-7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77.scope - libcontainer container 7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77. Apr 14 00:50:51.221613 containerd[1472]: time="2026-04-14T00:50:51.221389347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-l7ctr,Uid:8c3eece2-8891-49ba-9804-7e5ff7463046,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\"" Apr 14 00:50:51.225190 kubelet[2511]: E0414 00:50:51.225022 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:51.734137 kubelet[2511]: E0414 00:50:51.732715 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:51.734137 kubelet[2511]: E0414 00:50:51.732926 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:51.772430 kubelet[2511]: I0414 00:50:51.772344 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xdmn" podStartSLOduration=1.772322138 podStartE2EDuration="1.772322138s" podCreationTimestamp="2026-04-14 00:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:50:51.772196513 +0000 UTC m=+7.344272270" 
watchObservedRunningTime="2026-04-14 00:50:51.772322138 +0000 UTC m=+7.344397896" Apr 14 00:50:55.554763 kubelet[2511]: E0414 00:50:55.554683 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:55.773822 kubelet[2511]: E0414 00:50:55.770807 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:57.392743 kubelet[2511]: E0414 00:50:57.392291 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:57.781926 kubelet[2511]: E0414 00:50:57.781865 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:50:59.470394 update_engine[1451]: I20260414 00:50:59.469246 1451 update_attempter.cc:509] Updating boot flags... Apr 14 00:50:59.576103 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2905) Apr 14 00:50:59.688874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2905) Apr 14 00:50:59.875123 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2905) Apr 14 00:51:03.102808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213990740.mount: Deactivated successfully. 
Apr 14 00:51:16.042752 containerd[1472]: time="2026-04-14T00:51:16.040926980Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:51:16.058213 containerd[1472]: time="2026-04-14T00:51:16.058104586Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 14 00:51:16.064108 containerd[1472]: time="2026-04-14T00:51:16.064013461Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:51:16.078762 containerd[1472]: time="2026-04-14T00:51:16.078666014Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 25.187331071s" Apr 14 00:51:16.079368 containerd[1472]: time="2026-04-14T00:51:16.079197454Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 14 00:51:16.089307 containerd[1472]: time="2026-04-14T00:51:16.089170986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 14 00:51:16.140767 containerd[1472]: time="2026-04-14T00:51:16.139732947Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 14 00:51:16.494598 containerd[1472]: time="2026-04-14T00:51:16.493746605Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\"" Apr 14 00:51:16.496311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985398583.mount: Deactivated successfully. Apr 14 00:51:16.499122 containerd[1472]: time="2026-04-14T00:51:16.498964193Z" level=info msg="StartContainer for \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\"" Apr 14 00:51:16.744167 systemd[1]: Started cri-containerd-4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b.scope - libcontainer container 4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b. Apr 14 00:51:17.025884 containerd[1472]: time="2026-04-14T00:51:17.025397904Z" level=info msg="StartContainer for \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\" returns successfully" Apr 14 00:51:17.092231 systemd[1]: cri-containerd-4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b.scope: Deactivated successfully. 
Apr 14 00:51:17.113107 kubelet[2511]: E0414 00:51:17.112425 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:17.299162 containerd[1472]: time="2026-04-14T00:51:17.298482933Z" level=info msg="shim disconnected" id=4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b namespace=k8s.io Apr 14 00:51:17.301699 containerd[1472]: time="2026-04-14T00:51:17.300713029Z" level=warning msg="cleaning up after shim disconnected" id=4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b namespace=k8s.io Apr 14 00:51:17.301699 containerd[1472]: time="2026-04-14T00:51:17.300754540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:51:17.394793 containerd[1472]: time="2026-04-14T00:51:17.394665821Z" level=warning msg="cleanup warnings time=\"2026-04-14T00:51:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 00:51:17.487770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b-rootfs.mount: Deactivated successfully. Apr 14 00:51:17.978337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602178827.mount: Deactivated successfully. 
Apr 14 00:51:18.205740 kubelet[2511]: E0414 00:51:18.204193 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:18.227788 containerd[1472]: time="2026-04-14T00:51:18.227728929Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 14 00:51:18.303230 containerd[1472]: time="2026-04-14T00:51:18.302436200Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\"" Apr 14 00:51:18.332933 containerd[1472]: time="2026-04-14T00:51:18.332493010Z" level=info msg="StartContainer for \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\"" Apr 14 00:51:18.410894 systemd[1]: Started cri-containerd-affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37.scope - libcontainer container affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37. Apr 14 00:51:18.490600 containerd[1472]: time="2026-04-14T00:51:18.490113744Z" level=info msg="StartContainer for \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\" returns successfully" Apr 14 00:51:18.526709 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 00:51:18.528481 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 00:51:18.530219 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 14 00:51:18.542891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 00:51:18.546408 systemd[1]: cri-containerd-affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37.scope: Deactivated successfully. 
Apr 14 00:51:18.630442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 00:51:18.655327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37-rootfs.mount: Deactivated successfully. Apr 14 00:51:18.687722 containerd[1472]: time="2026-04-14T00:51:18.687323658Z" level=info msg="shim disconnected" id=affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37 namespace=k8s.io Apr 14 00:51:18.687722 containerd[1472]: time="2026-04-14T00:51:18.687689144Z" level=warning msg="cleaning up after shim disconnected" id=affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37 namespace=k8s.io Apr 14 00:51:18.687722 containerd[1472]: time="2026-04-14T00:51:18.687720229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:51:19.256296 kubelet[2511]: E0414 00:51:19.255416 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:19.308163 containerd[1472]: time="2026-04-14T00:51:19.306467073Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 14 00:51:19.420858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117918919.mount: Deactivated successfully. 
Apr 14 00:51:19.435793 containerd[1472]: time="2026-04-14T00:51:19.435430282Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\"" Apr 14 00:51:19.439332 containerd[1472]: time="2026-04-14T00:51:19.438467577Z" level=info msg="StartContainer for \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\"" Apr 14 00:51:19.628981 systemd[1]: Started cri-containerd-44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc.scope - libcontainer container 44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc. Apr 14 00:51:19.702475 containerd[1472]: time="2026-04-14T00:51:19.701889363Z" level=info msg="StartContainer for \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\" returns successfully" Apr 14 00:51:19.702393 systemd[1]: cri-containerd-44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc.scope: Deactivated successfully. Apr 14 00:51:19.780381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc-rootfs.mount: Deactivated successfully. 
Apr 14 00:51:19.825808 containerd[1472]: time="2026-04-14T00:51:19.823365978Z" level=info msg="shim disconnected" id=44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc namespace=k8s.io Apr 14 00:51:19.825808 containerd[1472]: time="2026-04-14T00:51:19.824638196Z" level=warning msg="cleaning up after shim disconnected" id=44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc namespace=k8s.io Apr 14 00:51:19.825808 containerd[1472]: time="2026-04-14T00:51:19.824669550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:51:20.327152 kubelet[2511]: E0414 00:51:20.327071 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:20.392690 containerd[1472]: time="2026-04-14T00:51:20.390882405Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 14 00:51:20.470978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146721364.mount: Deactivated successfully. Apr 14 00:51:20.492429 containerd[1472]: time="2026-04-14T00:51:20.491400627Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\"" Apr 14 00:51:20.499881 containerd[1472]: time="2026-04-14T00:51:20.496290017Z" level=info msg="StartContainer for \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\"" Apr 14 00:51:20.731235 systemd[1]: run-containerd-runc-k8s.io-aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357-runc.ffM9Dm.mount: Deactivated successfully. 
Apr 14 00:51:20.759485 systemd[1]: Started cri-containerd-aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357.scope - libcontainer container aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357. Apr 14 00:51:20.915905 systemd[1]: cri-containerd-aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357.scope: Deactivated successfully. Apr 14 00:51:20.994861 containerd[1472]: time="2026-04-14T00:51:20.994325479Z" level=info msg="StartContainer for \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\" returns successfully" Apr 14 00:51:21.156806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357-rootfs.mount: Deactivated successfully. Apr 14 00:51:21.209622 containerd[1472]: time="2026-04-14T00:51:21.209444190Z" level=info msg="shim disconnected" id=aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357 namespace=k8s.io Apr 14 00:51:21.209622 containerd[1472]: time="2026-04-14T00:51:21.209613887Z" level=warning msg="cleaning up after shim disconnected" id=aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357 namespace=k8s.io Apr 14 00:51:21.209622 containerd[1472]: time="2026-04-14T00:51:21.209622077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:51:21.218686 containerd[1472]: time="2026-04-14T00:51:21.215363041Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:51:21.218686 containerd[1472]: time="2026-04-14T00:51:21.218365177Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 14 00:51:21.226323 containerd[1472]: time="2026-04-14T00:51:21.226075644Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:51:21.230942 containerd[1472]: time="2026-04-14T00:51:21.230670593Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.139293234s" Apr 14 00:51:21.230942 containerd[1472]: time="2026-04-14T00:51:21.230708989Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 14 00:51:21.258142 containerd[1472]: time="2026-04-14T00:51:21.256979058Z" level=info msg="CreateContainer within sandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 14 00:51:21.333626 containerd[1472]: time="2026-04-14T00:51:21.333298458Z" level=info msg="CreateContainer within sandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\"" Apr 14 00:51:21.338365 containerd[1472]: time="2026-04-14T00:51:21.338265632Z" level=info msg="StartContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\"" Apr 14 00:51:21.464896 kubelet[2511]: E0414 00:51:21.464810 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:21.488784 
containerd[1472]: time="2026-04-14T00:51:21.486929282Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 14 00:51:21.540355 systemd[1]: Started cri-containerd-b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7.scope - libcontainer container b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7. Apr 14 00:51:21.605970 containerd[1472]: time="2026-04-14T00:51:21.605838325Z" level=info msg="CreateContainer within sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\"" Apr 14 00:51:21.611719 containerd[1472]: time="2026-04-14T00:51:21.609375892Z" level=info msg="StartContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\"" Apr 14 00:51:21.799103 containerd[1472]: time="2026-04-14T00:51:21.797980252Z" level=info msg="StartContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" returns successfully" Apr 14 00:51:21.887238 systemd[1]: Started cri-containerd-4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7.scope - libcontainer container 4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7. 
Apr 14 00:51:21.999954 containerd[1472]: time="2026-04-14T00:51:21.996807251Z" level=info msg="StartContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" returns successfully" Apr 14 00:51:22.536216 kubelet[2511]: E0414 00:51:22.536128 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:22.640954 kubelet[2511]: I0414 00:51:22.640651 2511 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 14 00:51:22.886955 kubelet[2511]: I0414 00:51:22.885984 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-l7ctr" podStartSLOduration=2.88601217 podStartE2EDuration="32.885953438s" podCreationTimestamp="2026-04-14 00:50:50 +0000 UTC" firstStartedPulling="2026-04-14 00:50:51.233161023 +0000 UTC m=+6.805236781" lastFinishedPulling="2026-04-14 00:51:21.233102304 +0000 UTC m=+36.805178049" observedRunningTime="2026-04-14 00:51:22.699468093 +0000 UTC m=+38.271543836" watchObservedRunningTime="2026-04-14 00:51:22.885953438 +0000 UTC m=+38.458029193" Apr 14 00:51:22.944345 systemd[1]: Created slice kubepods-burstable-pod5f858833_46ce_40ca_9c58_f91ebed7e6cb.slice - libcontainer container kubepods-burstable-pod5f858833_46ce_40ca_9c58_f91ebed7e6cb.slice. Apr 14 00:51:22.974734 systemd[1]: Created slice kubepods-burstable-podcbea01d0_96bc_479c_b808_ba1a58e219d4.slice - libcontainer container kubepods-burstable-podcbea01d0_96bc_479c_b808_ba1a58e219d4.slice. 
Apr 14 00:51:23.039237 kubelet[2511]: I0414 00:51:23.038484 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr7lb\" (UniqueName: \"kubernetes.io/projected/5f858833-46ce-40ca-9c58-f91ebed7e6cb-kube-api-access-dr7lb\") pod \"coredns-66bc5c9577-dz4c6\" (UID: \"5f858833-46ce-40ca-9c58-f91ebed7e6cb\") " pod="kube-system/coredns-66bc5c9577-dz4c6" Apr 14 00:51:23.047480 kubelet[2511]: I0414 00:51:23.043679 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9z98\" (UniqueName: \"kubernetes.io/projected/cbea01d0-96bc-479c-b808-ba1a58e219d4-kube-api-access-j9z98\") pod \"coredns-66bc5c9577-r6dzr\" (UID: \"cbea01d0-96bc-479c-b808-ba1a58e219d4\") " pod="kube-system/coredns-66bc5c9577-r6dzr" Apr 14 00:51:23.047480 kubelet[2511]: I0414 00:51:23.045177 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f858833-46ce-40ca-9c58-f91ebed7e6cb-config-volume\") pod \"coredns-66bc5c9577-dz4c6\" (UID: \"5f858833-46ce-40ca-9c58-f91ebed7e6cb\") " pod="kube-system/coredns-66bc5c9577-dz4c6" Apr 14 00:51:23.047480 kubelet[2511]: I0414 00:51:23.045417 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbea01d0-96bc-479c-b808-ba1a58e219d4-config-volume\") pod \"coredns-66bc5c9577-r6dzr\" (UID: \"cbea01d0-96bc-479c-b808-ba1a58e219d4\") " pod="kube-system/coredns-66bc5c9577-r6dzr" Apr 14 00:51:23.592669 kubelet[2511]: E0414 00:51:23.592091 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:23.597291 containerd[1472]: time="2026-04-14T00:51:23.595451429Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-dz4c6,Uid:5f858833-46ce-40ca-9c58-f91ebed7e6cb,Namespace:kube-system,Attempt:0,}" Apr 14 00:51:23.612437 kubelet[2511]: E0414 00:51:23.611819 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:23.631496 containerd[1472]: time="2026-04-14T00:51:23.631185730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r6dzr,Uid:cbea01d0-96bc-479c-b808-ba1a58e219d4,Namespace:kube-system,Attempt:0,}" Apr 14 00:51:23.665486 kubelet[2511]: E0414 00:51:23.662952 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:23.667271 kubelet[2511]: E0414 00:51:23.666236 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:24.676367 kubelet[2511]: E0414 00:51:24.676250 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:25.716411 kubelet[2511]: E0414 00:51:25.715369 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:27.550321 systemd-networkd[1401]: cilium_host: Link UP Apr 14 00:51:27.550703 systemd-networkd[1401]: cilium_net: Link UP Apr 14 00:51:27.550903 systemd-networkd[1401]: cilium_net: Gained carrier Apr 14 00:51:27.551102 systemd-networkd[1401]: cilium_host: Gained carrier Apr 14 00:51:27.905945 systemd-networkd[1401]: cilium_net: Gained IPv6LL Apr 14 00:51:27.978363 systemd-networkd[1401]: cilium_vxlan: Link UP Apr 14 00:51:27.978370 
systemd-networkd[1401]: cilium_vxlan: Gained carrier Apr 14 00:51:28.549327 systemd-networkd[1401]: cilium_host: Gained IPv6LL Apr 14 00:51:29.179819 kernel: NET: Registered PF_ALG protocol family Apr 14 00:51:29.895438 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL Apr 14 00:51:34.294790 systemd-networkd[1401]: lxc_health: Link UP Apr 14 00:51:34.324472 systemd-networkd[1401]: lxc_health: Gained carrier Apr 14 00:51:34.726370 kubelet[2511]: E0414 00:51:34.725742 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:34.928708 kubelet[2511]: I0414 00:51:34.928339 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fqdjb" podStartSLOduration=19.735092995 podStartE2EDuration="44.928196938s" podCreationTimestamp="2026-04-14 00:50:50 +0000 UTC" firstStartedPulling="2026-04-14 00:50:50.890745864 +0000 UTC m=+6.462821612" lastFinishedPulling="2026-04-14 00:51:16.08384981 +0000 UTC m=+31.655925555" observedRunningTime="2026-04-14 00:51:24.532695136 +0000 UTC m=+40.104770884" watchObservedRunningTime="2026-04-14 00:51:34.928196938 +0000 UTC m=+50.500272690" Apr 14 00:51:35.046461 kubelet[2511]: E0414 00:51:35.039181 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:35.209013 systemd-networkd[1401]: lxc8895110dcc8b: Link UP Apr 14 00:51:35.226786 kernel: eth0: renamed from tmp968b9 Apr 14 00:51:35.237475 systemd-networkd[1401]: lxc8895110dcc8b: Gained carrier Apr 14 00:51:35.457929 kernel: eth0: renamed from tmp9bf02 Apr 14 00:51:35.464407 systemd-networkd[1401]: lxcce75fc9524ab: Link UP Apr 14 00:51:35.469372 systemd-networkd[1401]: lxcce75fc9524ab: Gained carrier Apr 14 00:51:35.522200 systemd-networkd[1401]: lxc_health: Gained IPv6LL Apr 14 
00:51:36.052232 kubelet[2511]: E0414 00:51:36.052092 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:36.380486 systemd-networkd[1401]: lxc8895110dcc8b: Gained IPv6LL Apr 14 00:51:37.058798 systemd-networkd[1401]: lxcce75fc9524ab: Gained IPv6LL Apr 14 00:51:53.655163 kubelet[2511]: E0414 00:51:53.655001 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:53.756625 containerd[1472]: time="2026-04-14T00:51:53.756287152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:51:53.756979 containerd[1472]: time="2026-04-14T00:51:53.756742201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:51:53.756979 containerd[1472]: time="2026-04-14T00:51:53.756781873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:51:53.756979 containerd[1472]: time="2026-04-14T00:51:53.756912909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:51:53.792259 containerd[1472]: time="2026-04-14T00:51:53.791814728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:51:53.792259 containerd[1472]: time="2026-04-14T00:51:53.792143493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:51:53.792259 containerd[1472]: time="2026-04-14T00:51:53.792165576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:51:53.792659 containerd[1472]: time="2026-04-14T00:51:53.792317155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:51:53.798660 systemd[1]: Started cri-containerd-968b91c1ef6d549dc5ca501916fd84df193b741db822b00186eb7e98e2da82cc.scope - libcontainer container 968b91c1ef6d549dc5ca501916fd84df193b741db822b00186eb7e98e2da82cc. Apr 14 00:51:53.819230 systemd[1]: Started cri-containerd-9bf02519ec88ae03bb10743856e8c5ad7aee50f95a88bdebf3f9f3a8369d6f83.scope - libcontainer container 9bf02519ec88ae03bb10743856e8c5ad7aee50f95a88bdebf3f9f3a8369d6f83. Apr 14 00:51:53.827656 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:51:53.840769 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:51:53.888772 containerd[1472]: time="2026-04-14T00:51:53.888644917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r6dzr,Uid:cbea01d0-96bc-479c-b808-ba1a58e219d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"968b91c1ef6d549dc5ca501916fd84df193b741db822b00186eb7e98e2da82cc\"" Apr 14 00:51:53.891043 kubelet[2511]: E0414 00:51:53.890989 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:53.919929 containerd[1472]: time="2026-04-14T00:51:53.918942219Z" level=info msg="CreateContainer within sandbox \"968b91c1ef6d549dc5ca501916fd84df193b741db822b00186eb7e98e2da82cc\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:51:53.941149 containerd[1472]: time="2026-04-14T00:51:53.941004071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dz4c6,Uid:5f858833-46ce-40ca-9c58-f91ebed7e6cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf02519ec88ae03bb10743856e8c5ad7aee50f95a88bdebf3f9f3a8369d6f83\"" Apr 14 00:51:53.974995 kubelet[2511]: E0414 00:51:53.974838 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:53.987818 containerd[1472]: time="2026-04-14T00:51:53.987754361Z" level=info msg="CreateContainer within sandbox \"9bf02519ec88ae03bb10743856e8c5ad7aee50f95a88bdebf3f9f3a8369d6f83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:51:53.999891 containerd[1472]: time="2026-04-14T00:51:53.999788810Z" level=info msg="CreateContainer within sandbox \"968b91c1ef6d549dc5ca501916fd84df193b741db822b00186eb7e98e2da82cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef2e3f73bd0281e8dbad5c03fc143668d917448db8557820464f4e294f9c6c5f\"" Apr 14 00:51:54.005818 containerd[1472]: time="2026-04-14T00:51:54.005749312Z" level=info msg="StartContainer for \"ef2e3f73bd0281e8dbad5c03fc143668d917448db8557820464f4e294f9c6c5f\"" Apr 14 00:51:54.039782 containerd[1472]: time="2026-04-14T00:51:54.039397442Z" level=info msg="CreateContainer within sandbox \"9bf02519ec88ae03bb10743856e8c5ad7aee50f95a88bdebf3f9f3a8369d6f83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb8800a78b35a8d77eb24986a0a5c35fbf99d66954b667c0165f74ec2b724fe0\"" Apr 14 00:51:54.041603 containerd[1472]: time="2026-04-14T00:51:54.041400698Z" level=info msg="StartContainer for \"cb8800a78b35a8d77eb24986a0a5c35fbf99d66954b667c0165f74ec2b724fe0\"" Apr 14 00:51:54.078780 systemd[1]: Started 
cri-containerd-ef2e3f73bd0281e8dbad5c03fc143668d917448db8557820464f4e294f9c6c5f.scope - libcontainer container ef2e3f73bd0281e8dbad5c03fc143668d917448db8557820464f4e294f9c6c5f. Apr 14 00:51:54.108120 systemd[1]: Started cri-containerd-cb8800a78b35a8d77eb24986a0a5c35fbf99d66954b667c0165f74ec2b724fe0.scope - libcontainer container cb8800a78b35a8d77eb24986a0a5c35fbf99d66954b667c0165f74ec2b724fe0. Apr 14 00:51:54.155205 containerd[1472]: time="2026-04-14T00:51:54.154929675Z" level=info msg="StartContainer for \"ef2e3f73bd0281e8dbad5c03fc143668d917448db8557820464f4e294f9c6c5f\" returns successfully" Apr 14 00:51:54.179232 containerd[1472]: time="2026-04-14T00:51:54.178962764Z" level=info msg="StartContainer for \"cb8800a78b35a8d77eb24986a0a5c35fbf99d66954b667c0165f74ec2b724fe0\" returns successfully" Apr 14 00:51:54.329478 kubelet[2511]: E0414 00:51:54.329397 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:54.357767 kubelet[2511]: E0414 00:51:54.357658 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:54.395007 kubelet[2511]: I0414 00:51:54.394320 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r6dzr" podStartSLOduration=64.394298753 podStartE2EDuration="1m4.394298753s" podCreationTimestamp="2026-04-14 00:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:51:54.393155823 +0000 UTC m=+69.965231572" watchObservedRunningTime="2026-04-14 00:51:54.394298753 +0000 UTC m=+69.966374493" Apr 14 00:51:55.368873 kubelet[2511]: E0414 00:51:55.367958 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:55.380632 kubelet[2511]: E0414 00:51:55.380394 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:55.422727 kubelet[2511]: I0414 00:51:55.422112 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dz4c6" podStartSLOduration=65.42209839 podStartE2EDuration="1m5.42209839s" podCreationTimestamp="2026-04-14 00:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:51:54.56204308 +0000 UTC m=+70.134118826" watchObservedRunningTime="2026-04-14 00:51:55.42209839 +0000 UTC m=+70.994174148" Apr 14 00:51:55.871423 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:39824.service - OpenSSH per-connection server daemon (10.0.0.1:39824). Apr 14 00:51:56.105843 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 39824 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:51:56.114464 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:51:56.144023 systemd-logind[1449]: New session 8 of user core. Apr 14 00:51:56.168772 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 14 00:51:56.375671 kubelet[2511]: E0414 00:51:56.375363 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:56.384111 kubelet[2511]: E0414 00:51:56.383270 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:51:56.893928 sshd[3947]: pam_unix(sshd:session): session closed for user core Apr 14 00:51:56.914983 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:39824.service: Deactivated successfully. Apr 14 00:51:56.924165 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 00:51:56.927947 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Apr 14 00:51:56.935075 systemd-logind[1449]: Removed session 8. Apr 14 00:51:59.656314 kubelet[2511]: E0414 00:51:59.656149 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:52:01.993861 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:39836.service - OpenSSH per-connection server daemon (10.0.0.1:39836). Apr 14 00:52:02.167458 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 39836 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:02.200662 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:02.237693 systemd-logind[1449]: New session 9 of user core. Apr 14 00:52:02.268132 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 00:52:02.702291 sshd[3962]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:02.725481 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:39836.service: Deactivated successfully. Apr 14 00:52:02.732022 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 14 00:52:02.737424 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Apr 14 00:52:02.738954 systemd-logind[1449]: Removed session 9. Apr 14 00:52:06.654717 kubelet[2511]: E0414 00:52:06.654241 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:52:07.806264 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:46412.service - OpenSSH per-connection server daemon (10.0.0.1:46412). Apr 14 00:52:07.914889 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 46412 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:07.922024 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:07.986572 systemd-logind[1449]: New session 10 of user core. Apr 14 00:52:07.998828 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 00:52:08.221887 sshd[3977]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:08.302577 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Apr 14 00:52:08.304463 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:46412.service: Deactivated successfully. Apr 14 00:52:08.312979 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 00:52:08.325163 systemd-logind[1449]: Removed session 10. Apr 14 00:52:11.681275 kubelet[2511]: E0414 00:52:11.677736 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:52:13.275609 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:46426.service - OpenSSH per-connection server daemon (10.0.0.1:46426). 
Apr 14 00:52:13.397817 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 46426 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:13.411742 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:13.435158 systemd-logind[1449]: New session 11 of user core. Apr 14 00:52:13.489475 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 00:52:13.916863 sshd[3992]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:13.978575 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:46426.service: Deactivated successfully. Apr 14 00:52:13.995695 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 00:52:14.004223 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Apr 14 00:52:14.016404 systemd-logind[1449]: Removed session 11. Apr 14 00:52:19.067702 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:51102.service - OpenSSH per-connection server daemon (10.0.0.1:51102). Apr 14 00:52:19.251064 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 51102 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:19.270883 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:19.309616 systemd-logind[1449]: New session 12 of user core. Apr 14 00:52:19.342182 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 00:52:19.933319 sshd[4007]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:20.008263 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:51102.service: Deactivated successfully. Apr 14 00:52:20.015473 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 00:52:20.019880 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Apr 14 00:52:20.026425 systemd-logind[1449]: Removed session 12. 
Apr 14 00:52:25.000796 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:51106.service - OpenSSH per-connection server daemon (10.0.0.1:51106). Apr 14 00:52:25.131651 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 51106 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:25.139213 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:25.165413 systemd-logind[1449]: New session 13 of user core. Apr 14 00:52:25.185201 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 00:52:25.638911 sshd[4024]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:25.650397 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:51106.service: Deactivated successfully. Apr 14 00:52:25.661800 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 00:52:25.666192 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Apr 14 00:52:25.689290 systemd-logind[1449]: Removed session 13. Apr 14 00:52:30.709101 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:36698.service - OpenSSH per-connection server daemon (10.0.0.1:36698). Apr 14 00:52:30.831391 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 36698 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:30.833147 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:30.912837 systemd-logind[1449]: New session 14 of user core. Apr 14 00:52:30.991770 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 14 00:52:31.499181 sshd[4039]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:31.544253 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Apr 14 00:52:31.547685 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:36698.service: Deactivated successfully. Apr 14 00:52:31.571346 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 14 00:52:31.606430 systemd-logind[1449]: Removed session 14. Apr 14 00:52:36.472105 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:50384.service - OpenSSH per-connection server daemon (10.0.0.1:50384). Apr 14 00:52:36.609863 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 50384 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:36.616751 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:36.634404 systemd-logind[1449]: New session 15 of user core. Apr 14 00:52:36.665248 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 14 00:52:37.096790 sshd[4054]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:37.109842 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:50384.service: Deactivated successfully. Apr 14 00:52:37.115329 systemd[1]: session-15.scope: Deactivated successfully. Apr 14 00:52:37.117228 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Apr 14 00:52:37.123176 systemd-logind[1449]: Removed session 15. Apr 14 00:52:40.680859 kubelet[2511]: E0414 00:52:40.680773 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:52:42.121271 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:50390.service - OpenSSH per-connection server daemon (10.0.0.1:50390). Apr 14 00:52:42.204972 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 50390 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:42.212701 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:42.224441 systemd-logind[1449]: New session 16 of user core. Apr 14 00:52:42.235223 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 14 00:52:42.539131 sshd[4070]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:42.547794 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:50390.service: Deactivated successfully. Apr 14 00:52:42.552938 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 00:52:42.556958 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Apr 14 00:52:42.562385 systemd-logind[1449]: Removed session 16. Apr 14 00:52:47.574166 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:49914.service - OpenSSH per-connection server daemon (10.0.0.1:49914). Apr 14 00:52:47.610283 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 49914 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:47.611833 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:47.619618 systemd-logind[1449]: New session 17 of user core. Apr 14 00:52:47.631299 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 14 00:52:47.873993 sshd[4088]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:47.880838 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:49914.service: Deactivated successfully. Apr 14 00:52:47.885414 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 00:52:47.886949 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Apr 14 00:52:47.890820 systemd-logind[1449]: Removed session 17. Apr 14 00:52:52.905418 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:49928.service - OpenSSH per-connection server daemon (10.0.0.1:49928). Apr 14 00:52:53.039854 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 49928 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:53.044832 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:53.050624 systemd-logind[1449]: New session 18 of user core. 
Apr 14 00:52:53.058165 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 14 00:52:53.230423 sshd[4106]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:53.239258 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:49928.service: Deactivated successfully. Apr 14 00:52:53.241576 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 00:52:53.247719 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Apr 14 00:52:53.267252 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:49944.service - OpenSSH per-connection server daemon (10.0.0.1:49944). Apr 14 00:52:53.273308 systemd-logind[1449]: Removed session 18. Apr 14 00:52:53.328092 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 49944 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:53.333293 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:53.349692 systemd-logind[1449]: New session 19 of user core. Apr 14 00:52:53.364392 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 00:52:53.694006 sshd[4121]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:53.712791 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:49944.service: Deactivated successfully. Apr 14 00:52:53.717673 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 00:52:53.720309 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Apr 14 00:52:53.733491 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:49952.service - OpenSSH per-connection server daemon (10.0.0.1:49952). Apr 14 00:52:53.743076 systemd-logind[1449]: Removed session 19. 
Apr 14 00:52:53.803306 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 49952 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:53.805284 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:53.813813 systemd-logind[1449]: New session 20 of user core. Apr 14 00:52:53.831202 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 00:52:54.104643 sshd[4134]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:54.115431 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:49952.service: Deactivated successfully. Apr 14 00:52:54.125967 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 00:52:54.131157 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Apr 14 00:52:54.137465 systemd-logind[1449]: Removed session 20. Apr 14 00:52:58.654152 kubelet[2511]: E0414 00:52:58.654002 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:52:59.133681 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:43614.service - OpenSSH per-connection server daemon (10.0.0.1:43614). Apr 14 00:52:59.205910 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 43614 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:52:59.208948 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:52:59.220212 systemd-logind[1449]: New session 21 of user core. Apr 14 00:52:59.238452 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 00:52:59.478606 sshd[4149]: pam_unix(sshd:session): session closed for user core Apr 14 00:52:59.481602 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:43614.service: Deactivated successfully. Apr 14 00:52:59.484636 systemd[1]: session-21.scope: Deactivated successfully. 
Apr 14 00:52:59.490023 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Apr 14 00:52:59.491264 systemd-logind[1449]: Removed session 21.
Apr 14 00:53:03.664443 kubelet[2511]: E0414 00:53:03.662888 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:04.534892 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:43620.service - OpenSSH per-connection server daemon (10.0.0.1:43620).
Apr 14 00:53:04.655316 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 43620 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:04.671284 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:04.687680 systemd-logind[1449]: New session 22 of user core.
Apr 14 00:53:04.705640 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 00:53:05.196035 sshd[4165]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:05.215207 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:43620.service: Deactivated successfully.
Apr 14 00:53:05.220917 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 00:53:05.223872 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Apr 14 00:53:05.237265 systemd-logind[1449]: Removed session 22.
Apr 14 00:53:10.311849 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:36486.service - OpenSSH per-connection server daemon (10.0.0.1:36486).
Apr 14 00:53:10.467406 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 36486 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:10.474659 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:10.512404 systemd-logind[1449]: New session 23 of user core.
Apr 14 00:53:10.533600 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 00:53:11.110412 sshd[4180]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:11.136902 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:36486.service: Deactivated successfully.
Apr 14 00:53:11.144279 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 00:53:11.154155 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Apr 14 00:53:11.163796 systemd-logind[1449]: Removed session 23.
Apr 14 00:53:11.689138 kubelet[2511]: E0414 00:53:11.688958 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:16.198403 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:35042.service - OpenSSH per-connection server daemon (10.0.0.1:35042).
Apr 14 00:53:16.273106 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 35042 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:16.274987 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:16.282705 systemd-logind[1449]: New session 24 of user core.
Apr 14 00:53:16.297593 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 00:53:16.464919 sshd[4195]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:16.469232 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:35042.service: Deactivated successfully.
Apr 14 00:53:16.470798 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 00:53:16.471961 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Apr 14 00:53:16.473472 systemd-logind[1449]: Removed session 24.
Apr 14 00:53:16.692863 kubelet[2511]: E0414 00:53:16.692489 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:21.541400 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058).
Apr 14 00:53:21.622622 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:21.627482 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:21.643047 systemd-logind[1449]: New session 25 of user core.
Apr 14 00:53:21.659344 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 00:53:21.893604 sshd[4210]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:21.911912 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:35058.service: Deactivated successfully.
Apr 14 00:53:21.916683 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 00:53:21.921976 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Apr 14 00:53:21.936831 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072).
Apr 14 00:53:21.939402 systemd-logind[1449]: Removed session 25.
Apr 14 00:53:22.023007 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:22.025850 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:22.038224 systemd-logind[1449]: New session 26 of user core.
Apr 14 00:53:22.048026 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 00:53:22.528777 sshd[4228]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:22.540141 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:35072.service: Deactivated successfully.
Apr 14 00:53:22.543707 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 00:53:22.544965 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Apr 14 00:53:22.555926 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:35076.service - OpenSSH per-connection server daemon (10.0.0.1:35076).
Apr 14 00:53:22.558925 systemd-logind[1449]: Removed session 26.
Apr 14 00:53:22.591915 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 35076 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:22.593986 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:22.600777 systemd-logind[1449]: New session 27 of user core.
Apr 14 00:53:22.608728 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 00:53:23.634823 sshd[4240]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:23.645023 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:35076.service: Deactivated successfully.
Apr 14 00:53:23.649688 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 00:53:23.652434 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit.
Apr 14 00:53:23.666026 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:35082.service - OpenSSH per-connection server daemon (10.0.0.1:35082).
Apr 14 00:53:23.671304 systemd-logind[1449]: Removed session 27.
Apr 14 00:53:23.739408 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:23.742962 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:23.756375 systemd-logind[1449]: New session 28 of user core.
Apr 14 00:53:23.764659 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 00:53:24.225484 sshd[4259]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:24.249625 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:35082.service: Deactivated successfully.
Apr 14 00:53:24.256486 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 00:53:24.264300 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit.
Apr 14 00:53:24.279912 systemd[1]: Started sshd@28-10.0.0.55:22-10.0.0.1:35092.service - OpenSSH per-connection server daemon (10.0.0.1:35092).
Apr 14 00:53:24.287013 systemd-logind[1449]: Removed session 28.
Apr 14 00:53:24.335641 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:24.342843 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:24.359000 systemd-logind[1449]: New session 29 of user core.
Apr 14 00:53:24.372252 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 00:53:24.704369 sshd[4273]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:24.727435 systemd[1]: sshd@28-10.0.0.55:22-10.0.0.1:35092.service: Deactivated successfully.
Apr 14 00:53:24.737164 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 00:53:24.739419 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit.
Apr 14 00:53:24.747914 systemd-logind[1449]: Removed session 29.
Apr 14 00:53:25.655004 kubelet[2511]: E0414 00:53:25.654454 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:29.754211 systemd[1]: Started sshd@29-10.0.0.55:22-10.0.0.1:57674.service - OpenSSH per-connection server daemon (10.0.0.1:57674).
Apr 14 00:53:29.869712 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 57674 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:29.873068 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:29.894000 systemd-logind[1449]: New session 30 of user core.
Apr 14 00:53:29.909656 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 00:53:30.335284 sshd[4290]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:30.350924 systemd[1]: sshd@29-10.0.0.55:22-10.0.0.1:57674.service: Deactivated successfully.
Apr 14 00:53:30.355224 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 00:53:30.360862 systemd-logind[1449]: Session 30 logged out. Waiting for processes to exit.
Apr 14 00:53:30.364823 systemd-logind[1449]: Removed session 30.
Apr 14 00:53:32.659060 kubelet[2511]: E0414 00:53:32.653987 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:33.667717 kubelet[2511]: E0414 00:53:33.667319 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:35.393886 systemd[1]: Started sshd@30-10.0.0.55:22-10.0.0.1:32962.service - OpenSSH per-connection server daemon (10.0.0.1:32962).
Apr 14 00:53:35.591733 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 32962 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:35.606463 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:35.653709 systemd-logind[1449]: New session 31 of user core.
Apr 14 00:53:35.679453 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 00:53:36.162609 sshd[4304]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:36.175005 systemd[1]: sshd@30-10.0.0.55:22-10.0.0.1:32962.service: Deactivated successfully.
Apr 14 00:53:36.181871 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 00:53:36.186406 systemd-logind[1449]: Session 31 logged out. Waiting for processes to exit.
Apr 14 00:53:36.199978 systemd-logind[1449]: Removed session 31.
Apr 14 00:53:41.285259 systemd[1]: Started sshd@31-10.0.0.55:22-10.0.0.1:32964.service - OpenSSH per-connection server daemon (10.0.0.1:32964).
Apr 14 00:53:41.452062 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 32964 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:41.474302 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:41.542211 systemd-logind[1449]: New session 32 of user core.
Apr 14 00:53:41.587078 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 14 00:53:42.188088 sshd[4318]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:42.207885 systemd[1]: sshd@31-10.0.0.55:22-10.0.0.1:32964.service: Deactivated successfully.
Apr 14 00:53:42.216818 systemd[1]: session-32.scope: Deactivated successfully.
Apr 14 00:53:42.234981 systemd-logind[1449]: Session 32 logged out. Waiting for processes to exit.
Apr 14 00:53:42.245795 systemd-logind[1449]: Removed session 32.
Apr 14 00:53:47.262015 systemd[1]: Started sshd@32-10.0.0.55:22-10.0.0.1:48850.service - OpenSSH per-connection server daemon (10.0.0.1:48850).
Apr 14 00:53:47.375781 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 48850 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:47.384484 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:47.439439 systemd-logind[1449]: New session 33 of user core.
Apr 14 00:53:47.498087 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 14 00:53:47.866085 sshd[4335]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:47.883985 systemd[1]: sshd@32-10.0.0.55:22-10.0.0.1:48850.service: Deactivated successfully.
Apr 14 00:53:47.894840 systemd[1]: session-33.scope: Deactivated successfully.
Apr 14 00:53:47.906821 systemd-logind[1449]: Session 33 logged out. Waiting for processes to exit.
Apr 14 00:53:47.915837 systemd-logind[1449]: Removed session 33.
Apr 14 00:53:52.965126 systemd[1]: Started sshd@33-10.0.0.55:22-10.0.0.1:48864.service - OpenSSH per-connection server daemon (10.0.0.1:48864).
Apr 14 00:53:53.125103 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 48864 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:53.130963 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:53.213050 systemd-logind[1449]: New session 34 of user core.
Apr 14 00:53:53.229947 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 14 00:53:53.913053 sshd[4351]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:53.937459 systemd[1]: sshd@33-10.0.0.55:22-10.0.0.1:48864.service: Deactivated successfully.
Apr 14 00:53:53.964049 systemd[1]: session-34.scope: Deactivated successfully.
Apr 14 00:53:53.981326 systemd-logind[1449]: Session 34 logged out. Waiting for processes to exit.
Apr 14 00:53:53.988072 systemd-logind[1449]: Removed session 34.
Apr 14 00:53:54.655316 kubelet[2511]: E0414 00:53:54.654657 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:53:58.991831 systemd[1]: Started sshd@34-10.0.0.55:22-10.0.0.1:42530.service - OpenSSH per-connection server daemon (10.0.0.1:42530).
Apr 14 00:53:59.133757 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 42530 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:53:59.145053 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:53:59.211362 systemd-logind[1449]: New session 35 of user core.
Apr 14 00:53:59.221016 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 14 00:53:59.691882 sshd[4365]: pam_unix(sshd:session): session closed for user core
Apr 14 00:53:59.700178 systemd[1]: sshd@34-10.0.0.55:22-10.0.0.1:42530.service: Deactivated successfully.
Apr 14 00:53:59.706457 systemd[1]: session-35.scope: Deactivated successfully.
Apr 14 00:53:59.709068 systemd-logind[1449]: Session 35 logged out. Waiting for processes to exit.
Apr 14 00:53:59.712713 systemd-logind[1449]: Removed session 35.
Apr 14 00:54:04.740469 systemd[1]: Started sshd@35-10.0.0.55:22-10.0.0.1:42546.service - OpenSSH per-connection server daemon (10.0.0.1:42546).
Apr 14 00:54:04.850169 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 42546 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:04.863173 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:04.924370 systemd-logind[1449]: New session 36 of user core.
Apr 14 00:54:04.931294 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 14 00:54:05.301175 sshd[4379]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:05.386815 systemd[1]: sshd@35-10.0.0.55:22-10.0.0.1:42546.service: Deactivated successfully.
Apr 14 00:54:05.399300 systemd[1]: session-36.scope: Deactivated successfully.
Apr 14 00:54:05.407825 systemd-logind[1449]: Session 36 logged out. Waiting for processes to exit.
Apr 14 00:54:05.412697 systemd-logind[1449]: Removed session 36.
Apr 14 00:54:05.674858 kubelet[2511]: E0414 00:54:05.674239 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:09.655115 kubelet[2511]: E0414 00:54:09.654907 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:10.392890 systemd[1]: Started sshd@36-10.0.0.55:22-10.0.0.1:35796.service - OpenSSH per-connection server daemon (10.0.0.1:35796).
Apr 14 00:54:10.445937 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 35796 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:10.451148 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:10.462924 systemd-logind[1449]: New session 37 of user core.
Apr 14 00:54:10.475250 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 14 00:54:10.776039 sshd[4394]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:10.784937 systemd[1]: sshd@36-10.0.0.55:22-10.0.0.1:35796.service: Deactivated successfully.
Apr 14 00:54:10.791742 systemd[1]: session-37.scope: Deactivated successfully.
Apr 14 00:54:10.796195 systemd-logind[1449]: Session 37 logged out. Waiting for processes to exit.
Apr 14 00:54:10.798950 systemd-logind[1449]: Removed session 37.
Apr 14 00:54:15.879643 systemd[1]: Started sshd@37-10.0.0.55:22-10.0.0.1:33502.service - OpenSSH per-connection server daemon (10.0.0.1:33502).
Apr 14 00:54:16.114929 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 33502 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:16.118792 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:16.148395 systemd-logind[1449]: New session 38 of user core.
Apr 14 00:54:16.161340 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 14 00:54:16.426964 sshd[4408]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:16.433194 systemd[1]: sshd@37-10.0.0.55:22-10.0.0.1:33502.service: Deactivated successfully.
Apr 14 00:54:16.437255 systemd[1]: session-38.scope: Deactivated successfully.
Apr 14 00:54:16.439129 systemd-logind[1449]: Session 38 logged out. Waiting for processes to exit.
Apr 14 00:54:16.446724 systemd-logind[1449]: Removed session 38.
Apr 14 00:54:19.684646 kubelet[2511]: E0414 00:54:19.684577 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:21.492800 systemd[1]: Started sshd@38-10.0.0.55:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518).
Apr 14 00:54:21.634593 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:21.672240 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:21.698975 systemd-logind[1449]: New session 39 of user core.
Apr 14 00:54:21.718139 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 14 00:54:22.172423 sshd[4424]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:22.185081 systemd[1]: sshd@38-10.0.0.55:22-10.0.0.1:33518.service: Deactivated successfully.
Apr 14 00:54:22.191560 systemd[1]: session-39.scope: Deactivated successfully.
Apr 14 00:54:22.196253 systemd-logind[1449]: Session 39 logged out. Waiting for processes to exit.
Apr 14 00:54:22.203460 systemd-logind[1449]: Removed session 39.
Apr 14 00:54:27.306308 systemd[1]: Started sshd@39-10.0.0.55:22-10.0.0.1:54040.service - OpenSSH per-connection server daemon (10.0.0.1:54040).
Apr 14 00:54:27.669204 kubelet[2511]: E0414 00:54:27.664956 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:27.728070 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 54040 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:27.748902 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:27.799854 systemd-logind[1449]: New session 40 of user core.
Apr 14 00:54:27.854882 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 14 00:54:28.534815 sshd[4440]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:28.580004 systemd[1]: sshd@39-10.0.0.55:22-10.0.0.1:54040.service: Deactivated successfully.
Apr 14 00:54:28.610465 systemd[1]: session-40.scope: Deactivated successfully.
Apr 14 00:54:28.680152 systemd-logind[1449]: Session 40 logged out. Waiting for processes to exit.
Apr 14 00:54:28.711677 systemd-logind[1449]: Removed session 40.
Apr 14 00:54:33.644480 systemd[1]: Started sshd@40-10.0.0.55:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050).
Apr 14 00:54:33.841935 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:33.853661 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:33.909159 systemd-logind[1449]: New session 41 of user core.
Apr 14 00:54:33.929319 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 14 00:54:34.579691 sshd[4455]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:34.598849 systemd[1]: sshd@40-10.0.0.55:22-10.0.0.1:54050.service: Deactivated successfully.
Apr 14 00:54:34.606412 systemd[1]: session-41.scope: Deactivated successfully.
Apr 14 00:54:34.629408 systemd-logind[1449]: Session 41 logged out. Waiting for processes to exit.
Apr 14 00:54:34.647594 systemd-logind[1449]: Removed session 41.
Apr 14 00:54:39.810074 systemd[1]: Started sshd@41-10.0.0.55:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334).
Apr 14 00:54:40.100329 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:40.102425 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:40.236926 systemd-logind[1449]: New session 42 of user core.
Apr 14 00:54:40.299121 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 00:54:41.419910 sshd[4469]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:41.441364 systemd[1]: sshd@41-10.0.0.55:22-10.0.0.1:59334.service: Deactivated successfully.
Apr 14 00:54:41.498672 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 00:54:41.507936 systemd-logind[1449]: Session 42 logged out. Waiting for processes to exit.
Apr 14 00:54:41.524242 systemd-logind[1449]: Removed session 42.
Apr 14 00:54:46.461227 systemd[1]: Started sshd@42-10.0.0.55:22-10.0.0.1:36410.service - OpenSSH per-connection server daemon (10.0.0.1:36410).
Apr 14 00:54:46.602954 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 36410 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:46.609660 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:46.718185 systemd-logind[1449]: New session 43 of user core.
Apr 14 00:54:46.727825 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 00:54:47.444224 sshd[4486]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:47.461952 systemd[1]: sshd@42-10.0.0.55:22-10.0.0.1:36410.service: Deactivated successfully.
Apr 14 00:54:47.470298 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 00:54:47.476376 systemd-logind[1449]: Session 43 logged out. Waiting for processes to exit.
Apr 14 00:54:47.497476 systemd[1]: Started sshd@43-10.0.0.55:22-10.0.0.1:36414.service - OpenSSH per-connection server daemon (10.0.0.1:36414).
Apr 14 00:54:47.506295 systemd-logind[1449]: Removed session 43.
Apr 14 00:54:47.688594 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 36414 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:47.695113 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:47.739146 systemd-logind[1449]: New session 44 of user core.
Apr 14 00:54:47.771404 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 00:54:51.131302 containerd[1472]: time="2026-04-14T00:54:51.131229049Z" level=info msg="StopContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" with timeout 30 (s)"
Apr 14 00:54:51.143254 containerd[1472]: time="2026-04-14T00:54:51.142260727Z" level=info msg="Stop container \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" with signal terminated"
Apr 14 00:54:51.203151 systemd[1]: cri-containerd-b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7.scope: Deactivated successfully.
Apr 14 00:54:51.204231 systemd[1]: cri-containerd-b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7.scope: Consumed 2.430s CPU time.
Apr 14 00:54:51.220935 containerd[1472]: time="2026-04-14T00:54:51.220803737Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 00:54:51.322367 containerd[1472]: time="2026-04-14T00:54:51.322223156Z" level=info msg="StopContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" with timeout 2 (s)"
Apr 14 00:54:51.324294 containerd[1472]: time="2026-04-14T00:54:51.323762484Z" level=info msg="Stop container \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" with signal terminated"
Apr 14 00:54:51.438009 systemd-networkd[1401]: lxc_health: Link DOWN
Apr 14 00:54:51.438018 systemd-networkd[1401]: lxc_health: Lost carrier
Apr 14 00:54:51.511821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7-rootfs.mount: Deactivated successfully.
Apr 14 00:54:51.543319 systemd[1]: cri-containerd-4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7.scope: Deactivated successfully.
Apr 14 00:54:51.547485 systemd[1]: cri-containerd-4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7.scope: Consumed 28.570s CPU time.
Apr 14 00:54:51.783099 containerd[1472]: time="2026-04-14T00:54:51.780333587Z" level=info msg="shim disconnected" id=b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7 namespace=k8s.io
Apr 14 00:54:51.783099 containerd[1472]: time="2026-04-14T00:54:51.780885800Z" level=warning msg="cleaning up after shim disconnected" id=b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7 namespace=k8s.io
Apr 14 00:54:51.783099 containerd[1472]: time="2026-04-14T00:54:51.780908189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:54:51.907223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7-rootfs.mount: Deactivated successfully.
Apr 14 00:54:51.916327 containerd[1472]: time="2026-04-14T00:54:51.912824557Z" level=info msg="shim disconnected" id=4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7 namespace=k8s.io
Apr 14 00:54:51.916327 containerd[1472]: time="2026-04-14T00:54:51.912906861Z" level=warning msg="cleaning up after shim disconnected" id=4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7 namespace=k8s.io
Apr 14 00:54:51.916327 containerd[1472]: time="2026-04-14T00:54:51.912915909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:54:51.992096 containerd[1472]: time="2026-04-14T00:54:51.991962434Z" level=info msg="StopContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" returns successfully"
Apr 14 00:54:52.000483 containerd[1472]: time="2026-04-14T00:54:51.997259729Z" level=info msg="StopPodSandbox for \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\""
Apr 14 00:54:52.000483 containerd[1472]: time="2026-04-14T00:54:51.999697541Z" level=info msg="Container to stop \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.009165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77-shm.mount: Deactivated successfully.
Apr 14 00:54:52.035476 systemd[1]: cri-containerd-7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77.scope: Deactivated successfully.
Apr 14 00:54:52.125803 containerd[1472]: time="2026-04-14T00:54:52.125188401Z" level=info msg="StopContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" returns successfully"
Apr 14 00:54:52.135984 containerd[1472]: time="2026-04-14T00:54:52.134174578Z" level=info msg="StopPodSandbox for \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\""
Apr 14 00:54:52.142070 containerd[1472]: time="2026-04-14T00:54:52.141996128Z" level=info msg="Container to stop \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.143121 containerd[1472]: time="2026-04-14T00:54:52.142286778Z" level=info msg="Container to stop \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.145453 containerd[1472]: time="2026-04-14T00:54:52.144970120Z" level=info msg="Container to stop \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.145453 containerd[1472]: time="2026-04-14T00:54:52.145009513Z" level=info msg="Container to stop \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.145453 containerd[1472]: time="2026-04-14T00:54:52.145024435Z" level=info msg="Container to stop \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 00:54:52.152064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4-shm.mount: Deactivated successfully.
Apr 14 00:54:52.216843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77-rootfs.mount: Deactivated successfully.
Apr 14 00:54:52.230314 systemd[1]: cri-containerd-21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4.scope: Deactivated successfully.
Apr 14 00:54:52.306122 containerd[1472]: time="2026-04-14T00:54:52.304040909Z" level=info msg="shim disconnected" id=7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77 namespace=k8s.io
Apr 14 00:54:52.306122 containerd[1472]: time="2026-04-14T00:54:52.304180443Z" level=warning msg="cleaning up after shim disconnected" id=7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77 namespace=k8s.io
Apr 14 00:54:52.306122 containerd[1472]: time="2026-04-14T00:54:52.304193820Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:54:52.430273 containerd[1472]: time="2026-04-14T00:54:52.430190135Z" level=info msg="TearDown network for sandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" successfully"
Apr 14 00:54:52.430273 containerd[1472]: time="2026-04-14T00:54:52.430338224Z" level=info msg="StopPodSandbox for \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" returns successfully"
Apr 14 00:54:52.493457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4-rootfs.mount: Deactivated successfully.
Apr 14 00:54:52.509409 sshd[4501]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:52.518616 kubelet[2511]: I0414 00:54:52.514333 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77"
Apr 14 00:54:52.608846 systemd[1]: sshd@43-10.0.0.55:22-10.0.0.1:36414.service: Deactivated successfully.
Apr 14 00:54:52.618302 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 00:54:52.618705 systemd[1]: session-44.scope: Consumed 1.058s CPU time.
Apr 14 00:54:52.625676 systemd-logind[1449]: Session 44 logged out. Waiting for processes to exit.
Apr 14 00:54:52.635729 kubelet[2511]: I0414 00:54:52.634052 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzng\" (UniqueName: \"kubernetes.io/projected/8c3eece2-8891-49ba-9804-7e5ff7463046-kube-api-access-7rzng\") pod \"8c3eece2-8891-49ba-9804-7e5ff7463046\" (UID: \"8c3eece2-8891-49ba-9804-7e5ff7463046\") "
Apr 14 00:54:52.650288 systemd[1]: Started sshd@44-10.0.0.55:22-10.0.0.1:36422.service - OpenSSH per-connection server daemon (10.0.0.1:36422).
Apr 14 00:54:52.657879 containerd[1472]: time="2026-04-14T00:54:52.656042675Z" level=info msg="shim disconnected" id=21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4 namespace=k8s.io
Apr 14 00:54:52.657879 containerd[1472]: time="2026-04-14T00:54:52.656256794Z" level=warning msg="cleaning up after shim disconnected" id=21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4 namespace=k8s.io
Apr 14 00:54:52.657879 containerd[1472]: time="2026-04-14T00:54:52.656274164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:54:52.668160 systemd-logind[1449]: Removed session 44.
Apr 14 00:54:52.674182 systemd[1]: var-lib-kubelet-pods-8c3eece2\x2d8891\x2d49ba\x2d9804\x2d7e5ff7463046-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rzng.mount: Deactivated successfully.
Apr 14 00:54:52.676153 kubelet[2511]: I0414 00:54:52.676042 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3eece2-8891-49ba-9804-7e5ff7463046-kube-api-access-7rzng" (OuterVolumeSpecName: "kube-api-access-7rzng") pod "8c3eece2-8891-49ba-9804-7e5ff7463046" (UID: "8c3eece2-8891-49ba-9804-7e5ff7463046"). InnerVolumeSpecName "kube-api-access-7rzng". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 14 00:54:52.737229 kubelet[2511]: I0414 00:54:52.736246 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3eece2-8891-49ba-9804-7e5ff7463046-cilium-config-path\") pod \"8c3eece2-8891-49ba-9804-7e5ff7463046\" (UID: \"8c3eece2-8891-49ba-9804-7e5ff7463046\") "
Apr 14 00:54:52.737229 kubelet[2511]: I0414 00:54:52.736344 2511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7rzng\" (UniqueName: \"kubernetes.io/projected/8c3eece2-8891-49ba-9804-7e5ff7463046-kube-api-access-7rzng\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:52.743402 kubelet[2511]: I0414 00:54:52.740369 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3eece2-8891-49ba-9804-7e5ff7463046-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c3eece2-8891-49ba-9804-7e5ff7463046" (UID: "8c3eece2-8891-49ba-9804-7e5ff7463046"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 14 00:54:52.746899 containerd[1472]: time="2026-04-14T00:54:52.745739896Z" level=info msg="TearDown network for sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" successfully"
Apr 14 00:54:52.746899 containerd[1472]: time="2026-04-14T00:54:52.745812000Z" level=info msg="StopPodSandbox for \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" returns successfully"
Apr 14 00:54:52.842994 kubelet[2511]: I0414 00:54:52.841015 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3eece2-8891-49ba-9804-7e5ff7463046-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:52.846353 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 36422 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:52.881629 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:52.924474 systemd-logind[1449]: New session 45 of user core.
Apr 14 00:54:52.997299 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022347 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-lib-modules\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022632 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-kernel\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022666 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-xtables-lock\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022685 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-cgroup\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022707 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-net\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032087 kubelet[2511]: I0414 00:54:53.022729 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-hostproc\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022755 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-run\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022788 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-hubble-tls\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022808 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr6d4\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-kube-api-access-zr6d4\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022830 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-config-path\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022847 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cni-path\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.032474 kubelet[2511]: I0414 00:54:53.022867 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9e5bc40-0425-4113-a407-a1133e43b316-clustermesh-secrets\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.033302 kubelet[2511]: I0414 00:54:53.022884 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-etc-cni-netd\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.033302 kubelet[2511]: I0414 00:54:53.022903 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-bpf-maps\") pod \"a9e5bc40-0425-4113-a407-a1133e43b316\" (UID: \"a9e5bc40-0425-4113-a407-a1133e43b316\") "
Apr 14 00:54:53.033302 kubelet[2511]: I0414 00:54:53.022988 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.033302 kubelet[2511]: I0414 00:54:53.023037 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.033302 kubelet[2511]: I0414 00:54:53.023054 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034202 kubelet[2511]: I0414 00:54:53.023071 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034202 kubelet[2511]: I0414 00:54:53.023092 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034202 kubelet[2511]: I0414 00:54:53.023107 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034202 kubelet[2511]: I0414 00:54:53.023124 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034202 kubelet[2511]: I0414 00:54:53.023139 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034460 kubelet[2511]: I0414 00:54:53.023155 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.034901 kubelet[2511]: I0414 00:54:53.034749 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 14 00:54:53.046918 kubelet[2511]: I0414 00:54:53.045371 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 14 00:54:53.057620 kubelet[2511]: I0414 00:54:53.054262 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-kube-api-access-zr6d4" (OuterVolumeSpecName: "kube-api-access-zr6d4") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "kube-api-access-zr6d4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 14 00:54:53.064441 systemd[1]: var-lib-kubelet-pods-a9e5bc40\x2d0425\x2d4113\x2da407\x2da1133e43b316-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzr6d4.mount: Deactivated successfully.
Apr 14 00:54:53.079042 kubelet[2511]: I0414 00:54:53.078612 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 14 00:54:53.084742 kubelet[2511]: I0414 00:54:53.084201 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e5bc40-0425-4113-a407-a1133e43b316-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9e5bc40-0425-4113-a407-a1133e43b316" (UID: "a9e5bc40-0425-4113-a407-a1133e43b316"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124051 2511 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124436 2511 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124599 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124613 2511 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124627 2511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zr6d4\" (UniqueName: \"kubernetes.io/projected/a9e5bc40-0425-4113-a407-a1133e43b316-kube-api-access-zr6d4\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124647 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124656 2511 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.124812 kubelet[2511]: I0414 00:54:53.124669 2511 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9e5bc40-0425-4113-a407-a1133e43b316-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124678 2511 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124687 2511 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124695 2511 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124703 2511 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124890 2511 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.125448 kubelet[2511]: I0414 00:54:53.124907 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9e5bc40-0425-4113-a407-a1133e43b316-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 14 00:54:53.159043 systemd[1]: var-lib-kubelet-pods-a9e5bc40\x2d0425\x2d4113\x2da407\x2da1133e43b316-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 14 00:54:53.159625 systemd[1]: var-lib-kubelet-pods-a9e5bc40\x2d0425\x2d4113\x2da407\x2da1133e43b316-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 14 00:54:53.681349 kubelet[2511]: I0414 00:54:53.678662 2511 scope.go:117] "RemoveContainer" containerID="4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7"
Apr 14 00:54:53.698978 containerd[1472]: time="2026-04-14T00:54:53.698821566Z" level=info msg="RemoveContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\""
Apr 14 00:54:53.721419 systemd[1]: Removed slice kubepods-besteffort-pod8c3eece2_8891_49ba_9804_7e5ff7463046.slice - libcontainer container kubepods-besteffort-pod8c3eece2_8891_49ba_9804_7e5ff7463046.slice.
Apr 14 00:54:53.722137 systemd[1]: kubepods-besteffort-pod8c3eece2_8891_49ba_9804_7e5ff7463046.slice: Consumed 2.469s CPU time.
Apr 14 00:54:53.753227 systemd[1]: Removed slice kubepods-burstable-poda9e5bc40_0425_4113_a407_a1133e43b316.slice - libcontainer container kubepods-burstable-poda9e5bc40_0425_4113_a407_a1133e43b316.slice.
Apr 14 00:54:53.753441 systemd[1]: kubepods-burstable-poda9e5bc40_0425_4113_a407_a1133e43b316.slice: Consumed 28.872s CPU time.
Apr 14 00:54:53.777383 containerd[1472]: time="2026-04-14T00:54:53.777267938Z" level=info msg="RemoveContainer for \"4faab6501259ddf74fd5ba99eed9d9be509aeb85477dcfd6ba842ccd0b5178c7\" returns successfully"
Apr 14 00:54:53.807195 kubelet[2511]: I0414 00:54:53.804378 2511 scope.go:117] "RemoveContainer" containerID="aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357"
Apr 14 00:54:53.827208 containerd[1472]: time="2026-04-14T00:54:53.826896632Z" level=info msg="RemoveContainer for \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\""
Apr 14 00:54:54.028777 containerd[1472]: time="2026-04-14T00:54:54.028278886Z" level=info msg="RemoveContainer for \"aa20769a08eb4640a351435749e1089ae03eb31b27e80b89038dbea270ef2357\" returns successfully"
Apr 14 00:54:54.040751 kubelet[2511]: I0414 00:54:54.036429 2511 scope.go:117] "RemoveContainer" containerID="44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc"
Apr 14 00:54:54.066831 containerd[1472]: time="2026-04-14T00:54:54.066693469Z" level=info msg="RemoveContainer for \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\""
Apr 14 00:54:54.123162 containerd[1472]: time="2026-04-14T00:54:54.119313694Z" level=info msg="RemoveContainer for \"44768989f781f196283ae0e246d4b0f12a068c47a1e9358ae0997da8380d89fc\" returns successfully"
Apr 14 00:54:54.133459 kubelet[2511]: I0414 00:54:54.130405 2511 scope.go:117] "RemoveContainer" containerID="affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37"
Apr 14 00:54:54.199673 containerd[1472]: time="2026-04-14T00:54:54.198322250Z" level=info msg="RemoveContainer for \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\""
Apr 14 00:54:54.355319 containerd[1472]: time="2026-04-14T00:54:54.355168358Z" level=info msg="RemoveContainer for \"affcab1af9a600dd5d9cac5f8133559110e41aefe9147731e99cfb138413ac37\" returns successfully"
Apr 14 00:54:54.358981 kubelet[2511]: I0414 00:54:54.357970 2511 scope.go:117] "RemoveContainer" containerID="4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b"
Apr 14 00:54:54.375035 containerd[1472]: time="2026-04-14T00:54:54.374813076Z" level=info msg="RemoveContainer for \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\""
Apr 14 00:54:54.549289 containerd[1472]: time="2026-04-14T00:54:54.548311422Z" level=info msg="RemoveContainer for \"4402fbcd6383f635f172547d5ffd4525789fbaf0bd69cdd9c5c5bac4d3e4018b\" returns successfully"
Apr 14 00:54:54.680642 kubelet[2511]: E0414 00:54:54.674359 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:54.735081 kubelet[2511]: I0414 00:54:54.735030 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3eece2-8891-49ba-9804-7e5ff7463046" path="/var/lib/kubelet/pods/8c3eece2-8891-49ba-9804-7e5ff7463046/volumes"
Apr 14 00:54:54.749624 kubelet[2511]: I0414 00:54:54.747126 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e5bc40-0425-4113-a407-a1133e43b316" path="/var/lib/kubelet/pods/a9e5bc40-0425-4113-a407-a1133e43b316/volumes"
Apr 14 00:54:55.325300 kubelet[2511]: E0414 00:54:55.321336 2511 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:54:56.310157 sshd[4649]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:56.331276 systemd[1]: sshd@44-10.0.0.55:22-10.0.0.1:36422.service: Deactivated successfully.
Apr 14 00:54:56.426490 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 00:54:56.429440 systemd[1]: session-45.scope: Consumed 1.165s CPU time.
Apr 14 00:54:56.436939 systemd-logind[1449]: Session 45 logged out. Waiting for processes to exit.
Apr 14 00:54:56.458138 systemd[1]: Started sshd@45-10.0.0.55:22-10.0.0.1:52356.service - OpenSSH per-connection server daemon (10.0.0.1:52356).
Apr 14 00:54:56.466304 systemd-logind[1449]: Removed session 45.
Apr 14 00:54:56.673659 sshd[4679]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:54:56.681003 sshd[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:54:56.737005 kubelet[2511]: I0414 00:54:56.732415 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-hostproc\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.737005 kubelet[2511]: I0414 00:54:56.732476 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-etc-cni-netd\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.735406 systemd[1]: Created slice kubepods-burstable-pod8c449d5d_658c_4874_aa4b_4d867cce4eb3.slice - libcontainer container kubepods-burstable-pod8c449d5d_658c_4874_aa4b_4d867cce4eb3.slice.
Apr 14 00:54:56.792844 kubelet[2511]: I0414 00:54:56.792769 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-lib-modules\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.792991 kubelet[2511]: I0414 00:54:56.792873 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-xtables-lock\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.792991 kubelet[2511]: I0414 00:54:56.792895 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c449d5d-658c-4874-aa4b-4d867cce4eb3-cilium-config-path\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.792991 kubelet[2511]: I0414 00:54:56.792915 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-host-proc-sys-kernel\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.792991 kubelet[2511]: I0414 00:54:56.792933 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-cilium-cgroup\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.792991 kubelet[2511]: I0414 00:54:56.792961 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-cilium-run\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.802413 kubelet[2511]: I0414 00:54:56.792986 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8c449d5d-658c-4874-aa4b-4d867cce4eb3-cilium-ipsec-secrets\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803100 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c449d5d-658c-4874-aa4b-4d867cce4eb3-hubble-tls\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803183 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-cni-path\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803218 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c449d5d-658c-4874-aa4b-4d867cce4eb3-clustermesh-secrets\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803436 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmj47\" (UniqueName: \"kubernetes.io/projected/8c449d5d-658c-4874-aa4b-4d867cce4eb3-kube-api-access-mmj47\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803594 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-host-proc-sys-net\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.805354 kubelet[2511]: I0414 00:54:56.803628 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c449d5d-658c-4874-aa4b-4d867cce4eb3-bpf-maps\") pod \"cilium-vfzc5\" (UID: \"8c449d5d-658c-4874-aa4b-4d867cce4eb3\") " pod="kube-system/cilium-vfzc5"
Apr 14 00:54:56.832385 systemd-logind[1449]: New session 46 of user core.
Apr 14 00:54:56.867444 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 00:54:57.003826 sshd[4679]: pam_unix(sshd:session): session closed for user core
Apr 14 00:54:57.124350 kubelet[2511]: E0414 00:54:57.124297 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:57.133908 containerd[1472]: time="2026-04-14T00:54:57.133771704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfzc5,Uid:8c449d5d-658c-4874-aa4b-4d867cce4eb3,Namespace:kube-system,Attempt:0,}"
Apr 14 00:54:57.154486 systemd[1]: Started sshd@46-10.0.0.55:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370).
Apr 14 00:54:57.159442 systemd[1]: sshd@45-10.0.0.55:22-10.0.0.1:52356.service: Deactivated successfully.
Apr 14 00:54:57.171066 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 00:54:57.180127 systemd-logind[1449]: Session 46 logged out. Waiting for processes to exit. Apr 14 00:54:57.188714 systemd-logind[1449]: Removed session 46. Apr 14 00:54:57.269421 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:54:57.268001 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:54:57.404306 systemd-logind[1449]: New session 47 of user core. Apr 14 00:54:57.411280 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 14 00:54:57.499137 containerd[1472]: time="2026-04-14T00:54:57.497439992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:54:57.500099 containerd[1472]: time="2026-04-14T00:54:57.499928591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:54:57.500267 containerd[1472]: time="2026-04-14T00:54:57.499980757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:54:57.500267 containerd[1472]: time="2026-04-14T00:54:57.500232809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:54:57.639292 systemd[1]: Started cri-containerd-39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f.scope - libcontainer container 39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f. 
Apr 14 00:54:57.877719 containerd[1472]: time="2026-04-14T00:54:57.877360676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfzc5,Uid:8c449d5d-658c-4874-aa4b-4d867cce4eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\"" Apr 14 00:54:57.885204 kubelet[2511]: E0414 00:54:57.885072 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:54:58.006382 containerd[1472]: time="2026-04-14T00:54:58.006261565Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 14 00:54:58.286312 containerd[1472]: time="2026-04-14T00:54:58.283486627Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b\"" Apr 14 00:54:58.292006 containerd[1472]: time="2026-04-14T00:54:58.291662230Z" level=info msg="StartContainer for \"c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b\"" Apr 14 00:54:58.658188 systemd[1]: Started cri-containerd-c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b.scope - libcontainer container c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b. 
Apr 14 00:54:58.663659 kubelet[2511]: E0414 00:54:58.659290 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:54:58.663659 kubelet[2511]: E0414 00:54:58.661400 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:54:58.967273 containerd[1472]: time="2026-04-14T00:54:58.967132369Z" level=info msg="StartContainer for \"c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b\" returns successfully" Apr 14 00:54:59.007765 systemd[1]: cri-containerd-c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b.scope: Deactivated successfully. Apr 14 00:54:59.271276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b-rootfs.mount: Deactivated successfully. 
Apr 14 00:54:59.411347 containerd[1472]: time="2026-04-14T00:54:59.407195041Z" level=info msg="shim disconnected" id=c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b namespace=k8s.io Apr 14 00:54:59.411347 containerd[1472]: time="2026-04-14T00:54:59.411252497Z" level=warning msg="cleaning up after shim disconnected" id=c6518f617f09f0bcd6e189c1cc1715802e8c461a91c5b0f4f2c4c9be3f678d5b namespace=k8s.io Apr 14 00:54:59.411347 containerd[1472]: time="2026-04-14T00:54:59.411283563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:54:59.582351 containerd[1472]: time="2026-04-14T00:54:59.581608712Z" level=warning msg="cleanup warnings time=\"2026-04-14T00:54:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 00:54:59.953255 kubelet[2511]: E0414 00:54:59.953130 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:00.015815 containerd[1472]: time="2026-04-14T00:55:00.015311861Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 14 00:55:00.090784 kubelet[2511]: I0414 00:55:00.089239 2511 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-14T00:55:00Z","lastTransitionTime":"2026-04-14T00:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 14 00:55:00.209724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532073572.mount: Deactivated successfully. 
Apr 14 00:55:00.294142 containerd[1472]: time="2026-04-14T00:55:00.294003171Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435\"" Apr 14 00:55:00.303790 containerd[1472]: time="2026-04-14T00:55:00.303572942Z" level=info msg="StartContainer for \"5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435\"" Apr 14 00:55:00.329038 kubelet[2511]: E0414 00:55:00.328645 2511 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:55:00.578225 systemd[1]: Started cri-containerd-5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435.scope - libcontainer container 5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435. Apr 14 00:55:01.022258 containerd[1472]: time="2026-04-14T00:55:01.019210439Z" level=info msg="StartContainer for \"5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435\" returns successfully" Apr 14 00:55:01.068456 kubelet[2511]: E0414 00:55:01.066074 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:01.070303 systemd[1]: cri-containerd-5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435.scope: Deactivated successfully. Apr 14 00:55:01.288421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435-rootfs.mount: Deactivated successfully. 
Apr 14 00:55:01.510129 containerd[1472]: time="2026-04-14T00:55:01.509727924Z" level=info msg="shim disconnected" id=5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435 namespace=k8s.io Apr 14 00:55:01.511286 containerd[1472]: time="2026-04-14T00:55:01.510182093Z" level=warning msg="cleaning up after shim disconnected" id=5371be8185347e81c7066d796a72afbad577aed9b5dde529509e7c3c41166435 namespace=k8s.io Apr 14 00:55:01.511286 containerd[1472]: time="2026-04-14T00:55:01.510200975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:55:02.115262 kubelet[2511]: E0414 00:55:02.114854 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:02.216183 containerd[1472]: time="2026-04-14T00:55:02.215918932Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 14 00:55:02.425695 containerd[1472]: time="2026-04-14T00:55:02.425362383Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06\"" Apr 14 00:55:02.434293 containerd[1472]: time="2026-04-14T00:55:02.432211040Z" level=info msg="StartContainer for \"f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06\"" Apr 14 00:55:02.671077 systemd[1]: Started cri-containerd-f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06.scope - libcontainer container f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06. Apr 14 00:55:02.913096 systemd[1]: cri-containerd-f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06.scope: Deactivated successfully. 
Apr 14 00:55:02.939791 containerd[1472]: time="2026-04-14T00:55:02.939416448Z" level=info msg="StartContainer for \"f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06\" returns successfully" Apr 14 00:55:03.070749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06-rootfs.mount: Deactivated successfully. Apr 14 00:55:03.235453 containerd[1472]: time="2026-04-14T00:55:03.235208012Z" level=info msg="shim disconnected" id=f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06 namespace=k8s.io Apr 14 00:55:03.236866 containerd[1472]: time="2026-04-14T00:55:03.235656094Z" level=warning msg="cleaning up after shim disconnected" id=f95b3a645f6aaa3198def786b19efba244fb2aab3830aefb4a230fc71e7b1e06 namespace=k8s.io Apr 14 00:55:03.236866 containerd[1472]: time="2026-04-14T00:55:03.235684106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:55:03.242102 kubelet[2511]: E0414 00:55:03.237875 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:04.280905 kubelet[2511]: E0414 00:55:04.279914 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:04.329021 containerd[1472]: time="2026-04-14T00:55:04.327995298Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 14 00:55:04.596323 containerd[1472]: time="2026-04-14T00:55:04.595728846Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id 
\"0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880\"" Apr 14 00:55:04.637716 containerd[1472]: time="2026-04-14T00:55:04.637275568Z" level=info msg="StartContainer for \"0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880\"" Apr 14 00:55:04.830331 systemd[1]: Started cri-containerd-0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880.scope - libcontainer container 0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880. Apr 14 00:55:05.056455 systemd[1]: cri-containerd-0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880.scope: Deactivated successfully. Apr 14 00:55:05.079016 containerd[1472]: time="2026-04-14T00:55:05.078843125Z" level=info msg="StartContainer for \"0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880\" returns successfully" Apr 14 00:55:05.319852 kubelet[2511]: E0414 00:55:05.319092 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:05.337261 kubelet[2511]: E0414 00:55:05.337058 2511 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:55:05.368337 containerd[1472]: time="2026-04-14T00:55:05.368177875Z" level=info msg="shim disconnected" id=0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880 namespace=k8s.io Apr 14 00:55:05.369729 containerd[1472]: time="2026-04-14T00:55:05.368373035Z" level=warning msg="cleaning up after shim disconnected" id=0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880 namespace=k8s.io Apr 14 00:55:05.369729 containerd[1472]: time="2026-04-14T00:55:05.368386237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:55:05.433861 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0f3238784f965a4e47ff0cc7551dbae51e0c7c1a0ade1a27717100ed2c085880-rootfs.mount: Deactivated successfully. Apr 14 00:55:06.378107 kubelet[2511]: E0414 00:55:06.375771 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:06.508441 containerd[1472]: time="2026-04-14T00:55:06.498394459Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 14 00:55:06.643494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236676572.mount: Deactivated successfully. Apr 14 00:55:06.752854 containerd[1472]: time="2026-04-14T00:55:06.749218094Z" level=info msg="CreateContainer within sandbox \"39904d6fe2ec896cd141076d63e667279ab30105677ce415631917860a030f6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d\"" Apr 14 00:55:06.772128 containerd[1472]: time="2026-04-14T00:55:06.767375590Z" level=info msg="StartContainer for \"96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d\"" Apr 14 00:55:07.066136 systemd[1]: Started cri-containerd-96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d.scope - libcontainer container 96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d. 
Apr 14 00:55:07.380482 containerd[1472]: time="2026-04-14T00:55:07.379967934Z" level=info msg="StartContainer for \"96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d\" returns successfully" Apr 14 00:55:08.530983 kubelet[2511]: E0414 00:55:08.530112 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:08.797731 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 14 00:55:09.550116 kubelet[2511]: E0414 00:55:09.549662 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:10.749167 systemd[1]: run-containerd-runc-k8s.io-96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d-runc.fcmD7e.mount: Deactivated successfully. Apr 14 00:55:12.661248 kubelet[2511]: E0414 00:55:12.660070 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:24.833604 systemd-networkd[1401]: lxc_health: Link UP Apr 14 00:55:24.957022 systemd-networkd[1401]: lxc_health: Gained carrier Apr 14 00:55:25.183883 kubelet[2511]: E0414 00:55:25.183780 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:25.460392 kubelet[2511]: I0414 00:55:25.460187 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vfzc5" podStartSLOduration=29.460173125 podStartE2EDuration="29.460173125s" podCreationTimestamp="2026-04-14 00:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 
00:55:08.72306599 +0000 UTC m=+264.295141778" watchObservedRunningTime="2026-04-14 00:55:25.460173125 +0000 UTC m=+281.032248885" Apr 14 00:55:26.030655 kubelet[2511]: E0414 00:55:26.030614 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:26.305786 systemd-networkd[1401]: lxc_health: Gained IPv6LL Apr 14 00:55:37.194463 systemd[1]: run-containerd-runc-k8s.io-96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d-runc.ba2Rll.mount: Deactivated successfully. Apr 14 00:55:40.316379 systemd[1]: run-containerd-runc-k8s.io-96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d-runc.jRwQCU.mount: Deactivated successfully. Apr 14 00:55:44.703561 kubelet[2511]: I0414 00:55:44.703457 2511 scope.go:117] "RemoveContainer" containerID="b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7" Apr 14 00:55:44.714716 containerd[1472]: time="2026-04-14T00:55:44.709273234Z" level=info msg="RemoveContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\"" Apr 14 00:55:44.841596 containerd[1472]: time="2026-04-14T00:55:44.841280976Z" level=info msg="RemoveContainer for \"b6922d3aaa7120c57ef5dd9bee81249a90315b9e692b73f6649a6d1afa001dc7\" returns successfully" Apr 14 00:55:44.912877 containerd[1472]: time="2026-04-14T00:55:44.911396540Z" level=info msg="StopPodSandbox for \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\"" Apr 14 00:55:44.912877 containerd[1472]: time="2026-04-14T00:55:44.911643391Z" level=info msg="TearDown network for sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" successfully" Apr 14 00:55:44.912877 containerd[1472]: time="2026-04-14T00:55:44.911660501Z" level=info msg="StopPodSandbox for \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" returns successfully" Apr 14 00:55:44.913157 containerd[1472]: 
time="2026-04-14T00:55:44.912910548Z" level=info msg="RemovePodSandbox for \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\"" Apr 14 00:55:44.913157 containerd[1472]: time="2026-04-14T00:55:44.913020176Z" level=info msg="Forcibly stopping sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\"" Apr 14 00:55:44.913157 containerd[1472]: time="2026-04-14T00:55:44.913086531Z" level=info msg="TearDown network for sandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" successfully" Apr 14 00:55:44.996661 containerd[1472]: time="2026-04-14T00:55:44.995903226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:55:44.996661 containerd[1472]: time="2026-04-14T00:55:44.996022773Z" level=info msg="RemovePodSandbox \"21c51bdfc6ad7a591c8e3c7d0d1b6d37bd8dfdc64393ff69f8efb133f1f722a4\" returns successfully" Apr 14 00:55:44.999372 containerd[1472]: time="2026-04-14T00:55:44.998843468Z" level=info msg="StopPodSandbox for \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\"" Apr 14 00:55:44.999713 containerd[1472]: time="2026-04-14T00:55:44.999443724Z" level=info msg="TearDown network for sandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" successfully" Apr 14 00:55:44.999713 containerd[1472]: time="2026-04-14T00:55:44.999597558Z" level=info msg="StopPodSandbox for \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" returns successfully" Apr 14 00:55:45.000911 containerd[1472]: time="2026-04-14T00:55:45.000813669Z" level=info msg="RemovePodSandbox for \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\"" Apr 14 00:55:45.000911 containerd[1472]: time="2026-04-14T00:55:45.000858326Z" level=info msg="Forcibly stopping sandbox 
\"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\"" Apr 14 00:55:45.003594 containerd[1472]: time="2026-04-14T00:55:45.001093441Z" level=info msg="TearDown network for sandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" successfully" Apr 14 00:55:45.106643 containerd[1472]: time="2026-04-14T00:55:45.105186613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:55:45.106643 containerd[1472]: time="2026-04-14T00:55:45.105451687Z" level=info msg="RemovePodSandbox \"7d3676710234295874a2be746d39323fa6684edd9bf108ea8a0a46a573b31f77\" returns successfully" Apr 14 00:55:46.663036 kubelet[2511]: E0414 00:55:46.659982 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:47.662712 kubelet[2511]: E0414 00:55:47.660287 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:55:49.180276 systemd[1]: run-containerd-runc-k8s.io-96aa1ee47f313936264aac0c8b098127c873b85806ac4328220f6df05cba8e5d-runc.ATZjoW.mount: Deactivated successfully. Apr 14 00:55:59.128196 sshd[4689]: pam_unix(sshd:session): session closed for user core Apr 14 00:55:59.156076 systemd[1]: sshd@46-10.0.0.55:22-10.0.0.1:52370.service: Deactivated successfully. Apr 14 00:55:59.197449 systemd[1]: session-47.scope: Deactivated successfully. Apr 14 00:55:59.215126 systemd[1]: session-47.scope: Consumed 1.680s CPU time. Apr 14 00:55:59.316260 systemd-logind[1449]: Session 47 logged out. Waiting for processes to exit. Apr 14 00:55:59.325189 systemd-logind[1449]: Removed session 47.