Aug 12 23:54:16.924170 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 12 23:54:16.924203 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:54:16.924215 kernel: BIOS-provided physical RAM map:
Aug 12 23:54:16.924222 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 12 23:54:16.924229 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 12 23:54:16.924236 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 12 23:54:16.924243 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 12 23:54:16.924250 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 12 23:54:16.924257 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 12 23:54:16.924266 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 12 23:54:16.924273 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 12 23:54:16.924280 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 12 23:54:16.924289 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 12 23:54:16.924296 kernel: NX (Execute Disable) protection: active
Aug 12 23:54:16.924304 kernel: APIC: Static calls initialized
Aug 12 23:54:16.924317 kernel: SMBIOS 2.8 present.
Aug 12 23:54:16.924326 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 12 23:54:16.924333 kernel: Hypervisor detected: KVM
Aug 12 23:54:16.924340 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 12 23:54:16.924347 kernel: kvm-clock: using sched offset of 3313734220 cycles
Aug 12 23:54:16.924355 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 12 23:54:16.924363 kernel: tsc: Detected 2794.750 MHz processor
Aug 12 23:54:16.924370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 12 23:54:16.924378 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 12 23:54:16.924385 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 12 23:54:16.924395 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 12 23:54:16.924403 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 12 23:54:16.924410 kernel: Using GB pages for direct mapping
Aug 12 23:54:16.924417 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:54:16.924424 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 12 23:54:16.924432 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924439 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924447 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924456 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 12 23:54:16.924464 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924471 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924479 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924486 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:54:16.924493 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 12 23:54:16.924501 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 12 23:54:16.924512 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 12 23:54:16.924522 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 12 23:54:16.924530 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 12 23:54:16.924537 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 12 23:54:16.924545 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 12 23:54:16.924552 kernel: No NUMA configuration found
Aug 12 23:54:16.924560 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 12 23:54:16.924568 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Aug 12 23:54:16.924578 kernel: Zone ranges:
Aug 12 23:54:16.924585 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 12 23:54:16.924593 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 12 23:54:16.924600 kernel: Normal empty
Aug 12 23:54:16.924608 kernel: Movable zone start for each node
Aug 12 23:54:16.924616 kernel: Early memory node ranges
Aug 12 23:54:16.924623 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 12 23:54:16.924631 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 12 23:54:16.924638 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 12 23:54:16.924648 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 12 23:54:16.924658 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 12 23:54:16.924666 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 12 23:54:16.924674 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 12 23:54:16.924682 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 12 23:54:16.924689 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 12 23:54:16.924697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 12 23:54:16.924704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 12 23:54:16.924712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 12 23:54:16.924723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 12 23:54:16.924733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 12 23:54:16.924743 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 12 23:54:16.924753 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 12 23:54:16.924763 kernel: TSC deadline timer available
Aug 12 23:54:16.924772 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 12 23:54:16.924780 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 12 23:54:16.924787 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 12 23:54:16.924798 kernel: kvm-guest: setup PV sched yield
Aug 12 23:54:16.924809 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 12 23:54:16.924816 kernel: Booting paravirtualized kernel on KVM
Aug 12 23:54:16.924824 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 12 23:54:16.924832 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 12 23:54:16.924840 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 12 23:54:16.924847 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 12 23:54:16.924855 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 12 23:54:16.924862 kernel: kvm-guest: PV spinlocks enabled
Aug 12 23:54:16.924881 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 12 23:54:16.924908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:54:16.924926 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:54:16.924943 kernel: random: crng init done
Aug 12 23:54:16.924951 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:54:16.924959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:54:16.924966 kernel: Fallback order for Node 0: 0
Aug 12 23:54:16.924974 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Aug 12 23:54:16.924982 kernel: Policy zone: DMA32
Aug 12 23:54:16.924993 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:54:16.925000 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 138948K reserved, 0K cma-reserved)
Aug 12 23:54:16.925008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:54:16.925016 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 12 23:54:16.925028 kernel: ftrace: allocated 149 pages with 4 groups
Aug 12 23:54:16.925036 kernel: Dynamic Preempt: voluntary
Aug 12 23:54:16.925043 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:54:16.925067 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:54:16.925075 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:54:16.925086 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:54:16.925094 kernel: Rude variant of Tasks RCU enabled.
Aug 12 23:54:16.925102 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:54:16.925110 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:54:16.925120 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:54:16.925128 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 12 23:54:16.925136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:54:16.925143 kernel: Console: colour VGA+ 80x25
Aug 12 23:54:16.925151 kernel: printk: console [ttyS0] enabled
Aug 12 23:54:16.925161 kernel: ACPI: Core revision 20230628
Aug 12 23:54:16.925169 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 12 23:54:16.925177 kernel: APIC: Switch to symmetric I/O mode setup
Aug 12 23:54:16.925193 kernel: x2apic enabled
Aug 12 23:54:16.925200 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 12 23:54:16.925208 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 12 23:54:16.925216 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 12 23:54:16.925227 kernel: kvm-guest: setup PV IPIs
Aug 12 23:54:16.925249 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 12 23:54:16.925258 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 12 23:54:16.925266 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 12 23:54:16.925274 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 12 23:54:16.925284 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 12 23:54:16.925292 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 12 23:54:16.925300 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 12 23:54:16.925308 kernel: Spectre V2 : Mitigation: Retpolines
Aug 12 23:54:16.925316 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 12 23:54:16.925326 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 12 23:54:16.925334 kernel: RETBleed: Mitigation: untrained return thunk
Aug 12 23:54:16.925345 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 12 23:54:16.925353 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 12 23:54:16.925361 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 12 23:54:16.925370 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 12 23:54:16.925378 kernel: x86/bugs: return thunk changed
Aug 12 23:54:16.925385 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 12 23:54:16.925396 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 12 23:54:16.925404 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 12 23:54:16.925412 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 12 23:54:16.925420 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 12 23:54:16.925428 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 12 23:54:16.925437 kernel: Freeing SMP alternatives memory: 32K
Aug 12 23:54:16.925447 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:54:16.925458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:54:16.925467 kernel: landlock: Up and running.
Aug 12 23:54:16.925479 kernel: SELinux: Initializing.
Aug 12 23:54:16.925487 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:54:16.925495 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:54:16.925503 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 12 23:54:16.925511 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:54:16.925519 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:54:16.925527 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:54:16.925535 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 12 23:54:16.925546 kernel: ... version: 0
Aug 12 23:54:16.925556 kernel: ... bit width: 48
Aug 12 23:54:16.925564 kernel: ... generic registers: 6
Aug 12 23:54:16.925572 kernel: ... value mask: 0000ffffffffffff
Aug 12 23:54:16.925580 kernel: ... max period: 00007fffffffffff
Aug 12 23:54:16.925588 kernel: ... fixed-purpose events: 0
Aug 12 23:54:16.925596 kernel: ... event mask: 000000000000003f
Aug 12 23:54:16.925603 kernel: signal: max sigframe size: 1776
Aug 12 23:54:16.925611 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:54:16.925620 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:54:16.925630 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:54:16.925638 kernel: smpboot: x86: Booting SMP configuration:
Aug 12 23:54:16.925646 kernel: .... node #0, CPUs: #1 #2 #3
Aug 12 23:54:16.925654 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:54:16.925662 kernel: smpboot: Max logical packages: 1
Aug 12 23:54:16.925670 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 12 23:54:16.925678 kernel: devtmpfs: initialized
Aug 12 23:54:16.925686 kernel: x86/mm: Memory block size: 128MB
Aug 12 23:54:16.925694 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:54:16.925704 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:54:16.925712 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:54:16.925720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:54:16.925728 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:54:16.925736 kernel: audit: type=2000 audit(1755042855.779:1): state=initialized audit_enabled=0 res=1
Aug 12 23:54:16.925744 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:54:16.925752 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 12 23:54:16.925760 kernel: cpuidle: using governor menu
Aug 12 23:54:16.925767 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:54:16.925778 kernel: dca service started, version 1.12.1
Aug 12 23:54:16.925786 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 12 23:54:16.925794 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 12 23:54:16.925811 kernel: PCI: Using configuration type 1 for base access
Aug 12 23:54:16.925828 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 12 23:54:16.925845 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:54:16.925854 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:54:16.925864 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:54:16.925872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:54:16.925883 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:54:16.925891 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:54:16.925899 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:54:16.925911 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:54:16.925919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 12 23:54:16.925927 kernel: ACPI: Interpreter enabled
Aug 12 23:54:16.925935 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 12 23:54:16.925943 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 12 23:54:16.925951 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 12 23:54:16.925962 kernel: PCI: Using E820 reservations for host bridge windows
Aug 12 23:54:16.925970 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 12 23:54:16.925978 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:54:16.926320 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:54:16.926470 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 12 23:54:16.926608 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 12 23:54:16.926619 kernel: PCI host bridge to bus 0000:00
Aug 12 23:54:16.926780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 12 23:54:16.926925 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 12 23:54:16.927062 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 12 23:54:16.927194 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 12 23:54:16.927316 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 12 23:54:16.927436 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 12 23:54:16.927556 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:54:16.927735 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 12 23:54:16.927888 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 12 23:54:16.928023 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 12 23:54:16.928173 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 12 23:54:16.928322 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 12 23:54:16.928468 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 12 23:54:16.928621 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:54:16.928770 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 12 23:54:16.928904 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 12 23:54:16.929036 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 12 23:54:16.929225 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 12 23:54:16.929365 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Aug 12 23:54:16.929500 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 12 23:54:16.929633 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 12 23:54:16.929803 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 12 23:54:16.929938 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Aug 12 23:54:16.930092 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 12 23:54:16.930236 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 12 23:54:16.930379 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 12 23:54:16.930535 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 12 23:54:16.930677 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 12 23:54:16.930830 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 12 23:54:16.930970 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Aug 12 23:54:16.931121 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Aug 12 23:54:16.931277 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 12 23:54:16.931418 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 12 23:54:16.931430 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 12 23:54:16.931444 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 12 23:54:16.931452 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 12 23:54:16.931460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 12 23:54:16.931468 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 12 23:54:16.931476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 12 23:54:16.931484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 12 23:54:16.931492 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 12 23:54:16.931500 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 12 23:54:16.931512 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 12 23:54:16.931522 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 12 23:54:16.931530 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 12 23:54:16.931538 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 12 23:54:16.931546 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 12 23:54:16.931554 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 12 23:54:16.931562 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 12 23:54:16.931570 kernel: iommu: Default domain type: Translated
Aug 12 23:54:16.931578 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 12 23:54:16.931586 kernel: PCI: Using ACPI for IRQ routing
Aug 12 23:54:16.931596 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 12 23:54:16.931604 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 12 23:54:16.931612 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 12 23:54:16.931761 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 12 23:54:16.931898 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 12 23:54:16.932031 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 12 23:54:16.932042 kernel: vgaarb: loaded
Aug 12 23:54:16.932066 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 12 23:54:16.932097 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 12 23:54:16.932121 kernel: clocksource: Switched to clocksource kvm-clock
Aug 12 23:54:16.932138 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:54:16.932147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:54:16.932155 kernel: pnp: PnP ACPI init
Aug 12 23:54:16.932346 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 12 23:54:16.932359 kernel: pnp: PnP ACPI: found 6 devices
Aug 12 23:54:16.932368 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 12 23:54:16.932380 kernel: NET: Registered PF_INET protocol family
Aug 12 23:54:16.932388 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:54:16.932396 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:54:16.932404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:54:16.932412 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:54:16.932420 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:54:16.932428 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:54:16.932436 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:54:16.932445 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:54:16.932456 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:54:16.932464 kernel: NET: Registered PF_XDP protocol family
Aug 12 23:54:16.932589 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 12 23:54:16.932714 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 12 23:54:16.932835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 12 23:54:16.932956 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 12 23:54:16.933093 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 12 23:54:16.933223 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 12 23:54:16.933239 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:54:16.933248 kernel: Initialise system trusted keyrings
Aug 12 23:54:16.933256 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:54:16.933264 kernel: Key type asymmetric registered
Aug 12 23:54:16.933272 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:54:16.933280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 12 23:54:16.933289 kernel: io scheduler mq-deadline registered
Aug 12 23:54:16.933296 kernel: io scheduler kyber registered
Aug 12 23:54:16.933304 kernel: io scheduler bfq registered
Aug 12 23:54:16.933312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 12 23:54:16.933323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 12 23:54:16.933331 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 12 23:54:16.933339 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 12 23:54:16.933347 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:54:16.933355 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 12 23:54:16.933363 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 12 23:54:16.933371 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 12 23:54:16.933380 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 12 23:54:16.933527 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 12 23:54:16.933543 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 12 23:54:16.933668 kernel: rtc_cmos 00:04: registered as rtc0
Aug 12 23:54:16.933791 kernel: rtc_cmos 00:04: setting system clock to 2025-08-12T23:54:16 UTC (1755042856)
Aug 12 23:54:16.933915 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 12 23:54:16.933926 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 12 23:54:16.933935 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:54:16.933943 kernel: Segment Routing with IPv6
Aug 12 23:54:16.933955 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:54:16.933964 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:54:16.933972 kernel: Key type dns_resolver registered
Aug 12 23:54:16.933980 kernel: IPI shorthand broadcast: enabled
Aug 12 23:54:16.933988 kernel: sched_clock: Marking stable (851005347, 113733888)->(981114830, -16375595)
Aug 12 23:54:16.933996 kernel: registered taskstats version 1
Aug 12 23:54:16.934004 kernel: Loading compiled-in X.509 certificates
Aug 12 23:54:16.934012 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 12 23:54:16.934020 kernel: Key type .fscrypt registered
Aug 12 23:54:16.934028 kernel: Key type fscrypt-provisioning registered
Aug 12 23:54:16.934039 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:54:16.934132 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:54:16.934142 kernel: ima: No architecture policies found
Aug 12 23:54:16.934150 kernel: clk: Disabling unused clocks
Aug 12 23:54:16.934158 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 12 23:54:16.934166 kernel: Write protecting the kernel read-only data: 38912k
Aug 12 23:54:16.934175 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 12 23:54:16.934189 kernel: Run /init as init process
Aug 12 23:54:16.934201 kernel: with arguments:
Aug 12 23:54:16.934209 kernel: /init
Aug 12 23:54:16.934217 kernel: with environment:
Aug 12 23:54:16.934224 kernel: HOME=/
Aug 12 23:54:16.934233 kernel: TERM=linux
Aug 12 23:54:16.934241 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:54:16.934250 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:54:16.934261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:54:16.934273 systemd[1]: Detected virtualization kvm.
Aug 12 23:54:16.934281 systemd[1]: Detected architecture x86-64.
Aug 12 23:54:16.934290 systemd[1]: Running in initrd.
Aug 12 23:54:16.934299 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:54:16.934307 systemd[1]: Hostname set to .
Aug 12 23:54:16.934316 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:54:16.934324 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:54:16.934333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:54:16.934344 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:54:16.934354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:54:16.934376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:54:16.934387 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:54:16.934397 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:54:16.934409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:54:16.934421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:54:16.934433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:54:16.934444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:54:16.934453 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:54:16.934462 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:54:16.934470 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:54:16.934479 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:54:16.934491 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:54:16.934499 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:54:16.934508 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:54:16.934517 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:54:16.934526 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:54:16.934535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:54:16.934543 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:54:16.934552 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:54:16.934561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:54:16.934572 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:54:16.934581 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:54:16.934589 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:54:16.934598 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:54:16.934607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:54:16.934615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:54:16.934624 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:54:16.934632 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:54:16.934644 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:54:16.934653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:54:16.934692 systemd-journald[194]: Collecting audit messages is disabled.
Aug 12 23:54:16.934717 systemd-journald[194]: Journal started
Aug 12 23:54:16.934741 systemd-journald[194]: Runtime Journal (/run/log/journal/96b6476591e048b79971240d731a8c34) is 6M, max 48.4M, 42.3M free.
Aug 12 23:54:16.923738 systemd-modules-load[195]: Inserted module 'overlay'
Aug 12 23:54:16.956304 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:54:16.956329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:54:16.956341 kernel: Bridge firewalling registered
Aug 12 23:54:16.954452 systemd-modules-load[195]: Inserted module 'br_netfilter'
Aug 12 23:54:16.962625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:54:16.963934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:54:16.968309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:54:16.969929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:54:16.973398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:54:16.975928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:54:16.976681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:54:16.987173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:54:16.993259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:54:16.994372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:54:17.009317 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:54:17.011693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:54:17.015083 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:54:17.031619 dracut-cmdline[232]: dracut-dracut-053
Aug 12 23:54:17.035542 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:54:17.046864 systemd-resolved[228]: Positive Trust Anchors:
Aug 12 23:54:17.046880 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:54:17.046912 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:54:17.049506 systemd-resolved[228]: Defaulting to hostname 'linux'.
Aug 12 23:54:17.050935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:54:17.056478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:54:17.129096 kernel: SCSI subsystem initialized
Aug 12 23:54:17.138081 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:54:17.148073 kernel: iscsi: registered transport (tcp)
Aug 12 23:54:17.171080 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:54:17.171128 kernel: QLogic iSCSI HBA Driver
Aug 12 23:54:17.231325 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:54:17.239264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:54:17.265424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:54:17.265465 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:54:17.266485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 12 23:54:17.310121 kernel: raid6: avx2x4 gen() 29755 MB/s
Aug 12 23:54:17.327143 kernel: raid6: avx2x2 gen() 26859 MB/s
Aug 12 23:54:17.344191 kernel: raid6: avx2x1 gen() 18131 MB/s
Aug 12 23:54:17.344293 kernel: raid6: using algorithm avx2x4 gen() 29755 MB/s
Aug 12 23:54:17.362390 kernel: raid6: .... xor() 7248 MB/s, rmw enabled
Aug 12 23:54:17.362502 kernel: raid6: using avx2x2 recovery algorithm
Aug 12 23:54:17.390096 kernel: xor: automatically using best checksumming function avx
Aug 12 23:54:17.560101 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:54:17.578197 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:54:17.597211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:54:17.613086 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Aug 12 23:54:17.619777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:54:17.631238 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:54:17.645726 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Aug 12 23:54:17.683701 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:54:17.698261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:54:17.784744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:54:17.796210 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:54:17.810862 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:54:17.814226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:54:17.815778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:54:17.818140 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:54:17.834165 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 12 23:54:17.834418 kernel: cryptd: max_cpu_qlen set to 1000
Aug 12 23:54:17.837364 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:54:17.843636 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:54:17.852371 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:54:17.857491 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 12 23:54:17.857515 kernel: AES CTR mode by8 optimization enabled
Aug 12 23:54:17.859087 kernel: libata version 3.00 loaded.
Aug 12 23:54:17.863298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:54:17.863323 kernel: GPT:9289727 != 19775487
Aug 12 23:54:17.863334 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:54:17.864512 kernel: GPT:9289727 != 19775487
Aug 12 23:54:17.864527 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:54:17.865520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:54:17.869079 kernel: ahci 0000:00:1f.2: version 3.0
Aug 12 23:54:17.869306 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 12 23:54:17.871089 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 12 23:54:17.871403 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 12 23:54:17.872503 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:54:17.872647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:54:17.876079 kernel: scsi host0: ahci
Aug 12 23:54:17.877099 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:54:17.878264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:54:17.881716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:54:17.885075 kernel: scsi host1: ahci
Aug 12 23:54:17.886967 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:54:17.890402 kernel: scsi host2: ahci
Aug 12 23:54:17.890603 kernel: scsi host3: ahci
Aug 12 23:54:17.890762 kernel: scsi host4: ahci
Aug 12 23:54:17.893187 kernel: scsi host5: ahci
Aug 12 23:54:17.896076 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/vda3 scanned by (udev-worker) (476)
Aug 12 23:54:17.896101 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Aug 12 23:54:17.896113 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Aug 12 23:54:17.897622 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Aug 12 23:54:17.897645 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Aug 12 23:54:17.899339 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Aug 12 23:54:17.899362 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Aug 12 23:54:17.900882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:54:17.903397 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:54:17.906152 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (467)
Aug 12 23:54:17.934095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:54:17.965558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:54:17.979174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:54:17.988901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:54:17.992411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:54:18.003782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:54:18.026398 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:54:18.029065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:54:18.036575 disk-uuid[557]: Primary Header is updated.
Aug 12 23:54:18.036575 disk-uuid[557]: Secondary Entries is updated.
Aug 12 23:54:18.036575 disk-uuid[557]: Secondary Header is updated.
Aug 12 23:54:18.041075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:54:18.046089 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:54:18.063388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:54:18.215466 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 12 23:54:18.215554 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 12 23:54:18.215567 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 12 23:54:18.215581 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 12 23:54:18.217090 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 12 23:54:18.218108 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 12 23:54:18.218140 kernel: ata3.00: applying bridge limits
Aug 12 23:54:18.219089 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 12 23:54:18.220127 kernel: ata3.00: configured for UDMA/100
Aug 12 23:54:18.220167 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 12 23:54:18.264079 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 12 23:54:18.264344 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 12 23:54:18.278076 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 12 23:54:19.048085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:54:19.048631 disk-uuid[558]: The operation has completed successfully.
Aug 12 23:54:19.076807 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:54:19.076936 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:54:19.130592 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:54:19.134737 sh[594]: Success
Aug 12 23:54:19.149109 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 12 23:54:19.188761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:54:19.208874 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:54:19.213644 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:54:19.227744 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 12 23:54:19.227830 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:54:19.227843 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 12 23:54:19.228729 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 12 23:54:19.229484 kernel: BTRFS info (device dm-0): using free space tree
Aug 12 23:54:19.235106 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:54:19.235864 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:54:19.240209 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:54:19.242264 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:54:19.262884 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:54:19.262954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:54:19.262966 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:54:19.266078 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:54:19.271096 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:54:19.280132 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:54:19.286285 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:54:19.411089 ignition[679]: Ignition 2.20.0
Aug 12 23:54:19.411105 ignition[679]: Stage: fetch-offline
Aug 12 23:54:19.411162 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:19.411173 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:19.411303 ignition[679]: parsed url from cmdline: ""
Aug 12 23:54:19.411308 ignition[679]: no config URL provided
Aug 12 23:54:19.411313 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:54:19.411323 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:54:19.411353 ignition[679]: op(1): [started] loading QEMU firmware config module
Aug 12 23:54:19.411358 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:54:19.421434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:54:19.421519 ignition[679]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:54:19.436929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:54:19.466192 ignition[679]: parsing config with SHA512: 477f20c44289abac05addf9f2e1e15db47757abb3c47911bc3ff2cb714a1d2de96029cfb550651a3ec725bd030a591aac92c15d2eebbf0821fe295e3882cbe91
Aug 12 23:54:19.467574 systemd-networkd[781]: lo: Link UP
Aug 12 23:54:19.467579 systemd-networkd[781]: lo: Gained carrier
Aug 12 23:54:19.470256 unknown[679]: fetched base config from "system"
Aug 12 23:54:19.470267 unknown[679]: fetched user config from "qemu"
Aug 12 23:54:19.471186 ignition[679]: fetch-offline: fetch-offline passed
Aug 12 23:54:19.470749 systemd-networkd[781]: Enumeration completed
Aug 12 23:54:19.471339 ignition[679]: Ignition finished successfully
Aug 12 23:54:19.470986 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:54:19.471317 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:54:19.471322 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:54:19.472118 systemd-networkd[781]: eth0: Link UP
Aug 12 23:54:19.472122 systemd-networkd[781]: eth0: Gained carrier
Aug 12 23:54:19.472129 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:54:19.472872 systemd[1]: Reached target network.target - Network.
Aug 12 23:54:19.487246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:54:19.488939 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:54:19.495135 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:54:19.495223 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:54:19.520967 ignition[785]: Ignition 2.20.0
Aug 12 23:54:19.520980 ignition[785]: Stage: kargs
Aug 12 23:54:19.521175 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:19.521188 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:19.522120 ignition[785]: kargs: kargs passed
Aug 12 23:54:19.522170 ignition[785]: Ignition finished successfully
Aug 12 23:54:19.525457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:54:19.543217 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:54:19.554403 ignition[795]: Ignition 2.20.0
Aug 12 23:54:19.554418 ignition[795]: Stage: disks
Aug 12 23:54:19.554579 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:19.554592 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:19.557412 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:54:19.555389 ignition[795]: disks: disks passed
Aug 12 23:54:19.559208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:54:19.555434 ignition[795]: Ignition finished successfully
Aug 12 23:54:19.561013 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:54:19.562842 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:54:19.563853 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:54:19.566033 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:54:19.576316 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:54:19.589382 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 12 23:54:19.595957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:54:19.604158 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:54:19.718105 kernel: EXT4-fs (vda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 12 23:54:19.719252 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:54:19.720718 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:54:19.739248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:54:19.741779 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:54:19.742270 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:54:19.742322 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:54:19.755707 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (813)
Aug 12 23:54:19.755738 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:54:19.755752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:54:19.755765 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:54:19.742358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:54:19.759359 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:54:19.750237 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:54:19.756916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:54:19.761695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:54:19.794776 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:54:19.799532 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:54:19.804806 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:54:19.810277 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:54:19.924168 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:54:19.934201 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:54:19.936166 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:54:19.946084 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:54:19.989549 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:54:19.996849 ignition[927]: INFO : Ignition 2.20.0
Aug 12 23:54:19.996849 ignition[927]: INFO : Stage: mount
Aug 12 23:54:19.998650 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:19.998650 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:19.998650 ignition[927]: INFO : mount: mount passed
Aug 12 23:54:19.998650 ignition[927]: INFO : Ignition finished successfully
Aug 12 23:54:20.000557 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:54:20.008202 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:54:20.226561 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:54:20.243364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:54:20.251091 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (939)
Aug 12 23:54:20.251133 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:54:20.252468 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:54:20.252483 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:54:20.256073 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:54:20.258067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:54:20.310455 ignition[956]: INFO : Ignition 2.20.0
Aug 12 23:54:20.310455 ignition[956]: INFO : Stage: files
Aug 12 23:54:20.312247 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:20.312247 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:20.312247 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:54:20.312247 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:54:20.312247 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:54:20.318880 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:54:20.318880 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:54:20.318880 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:54:20.318880 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 12 23:54:20.318880 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 12 23:54:20.315523 unknown[956]: wrote ssh authorized keys file for user: core
Aug 12 23:54:20.367422 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:54:20.702491 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 12 23:54:20.702491 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:54:20.706885 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 12 23:54:20.742298 systemd-networkd[781]: eth0: Gained IPv6LL
Aug 12 23:54:20.929031 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 12 23:54:21.344493 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:54:21.344493 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 12 23:54:21.349316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 12 23:54:21.727415 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 12 23:54:22.397478 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 12 23:54:22.397478 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 12 23:54:22.401775 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:54:22.422760 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:54:22.426956 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:54:22.428507 ignition[956]: INFO : files: files passed
Aug 12 23:54:22.428507 ignition[956]: INFO : Ignition finished successfully
Aug 12 23:54:22.429629 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:54:22.438295 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:54:22.440867 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:54:22.442736 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:54:22.442852 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:54:22.451336 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 12 23:54:22.454376 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:54:22.454376 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:54:22.457702 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:54:22.460884 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:54:22.462416 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 12 23:54:22.471397 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 12 23:54:22.501712 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:54:22.503150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 12 23:54:22.507074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 12 23:54:22.509744 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:54:22.512378 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:54:22.528314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:54:22.542484 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:54:22.557283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:54:22.569526 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:54:22.572521 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:54:22.575536 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:54:22.577776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:54:22.579034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:54:22.582148 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:54:22.584203 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:54:22.586126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:54:22.588304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:54:22.590590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:54:22.592786 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:54:22.594811 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:54:22.597270 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:54:22.599344 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:54:22.601347 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:54:22.602942 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:54:22.603968 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:54:22.606255 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:54:22.608536 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:54:22.610986 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 12 23:54:22.612103 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:54:22.614688 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:54:22.615731 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:54:22.617976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:54:22.619071 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:54:22.621378 systemd[1]: Stopped target paths.target - Path Units.
Aug 12 23:54:22.623114 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 12 23:54:22.624246 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:54:22.626914 systemd[1]: Stopped target slices.target - Slice Units.
Aug 12 23:54:22.628759 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 12 23:54:22.630597 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 12 23:54:22.631493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:54:22.633456 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 12 23:54:22.634334 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:54:22.636351 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 12 23:54:22.637502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:54:22.639977 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 12 23:54:22.640945 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 12 23:54:22.654210 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 12 23:54:22.656910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 12 23:54:22.658640 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 12 23:54:22.659750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:54:22.662151 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 12 23:54:22.663232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:54:22.672169 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 12 23:54:22.673150 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 12 23:54:22.674554 ignition[1010]: INFO : Ignition 2.20.0
Aug 12 23:54:22.674554 ignition[1010]: INFO : Stage: umount
Aug 12 23:54:22.674554 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:54:22.674554 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:54:22.674554 ignition[1010]: INFO : umount: umount passed
Aug 12 23:54:22.674554 ignition[1010]: INFO : Ignition finished successfully
Aug 12 23:54:22.677518 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 12 23:54:22.677668 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 12 23:54:22.679807 systemd[1]: Stopped target network.target - Network.
Aug 12 23:54:22.681540 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 12 23:54:22.681611 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 12 23:54:22.683342 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 12 23:54:22.683396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 12 23:54:22.685389 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 12 23:54:22.685461 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 12 23:54:22.687387 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 12 23:54:22.687454 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 12 23:54:22.689487 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 12 23:54:22.691262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 12 23:54:22.694671 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 12 23:54:22.695353 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 12 23:54:22.695490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 12 23:54:22.700792 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 12 23:54:22.701846 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 12 23:54:22.701976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:54:22.705926 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:54:22.706297 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 12 23:54:22.706451 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 12 23:54:22.708880 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 12 23:54:22.709634 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 12 23:54:22.709725 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:54:22.720244 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 12 23:54:22.722250 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 12 23:54:22.722329 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:54:22.724734 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:54:22.724795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:54:22.727314 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:54:22.727385 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:54:22.729232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:54:22.732412 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:54:22.742564 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:54:22.742725 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:54:22.753968 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:54:22.754197 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:54:22.756559 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:54:22.756619 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:54:22.758487 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:54:22.758529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:54:22.759458 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:54:22.759513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:54:22.760258 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:54:22.760308 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:54:22.760924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:54:22.760973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:54:22.777437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:54:22.779821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:54:22.779948 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:54:22.782634 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 12 23:54:22.782732 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:54:22.785236 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 12 23:54:22.785290 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:54:22.786424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:54:22.786476 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:54:22.794914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:54:22.795111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:54:22.909948 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:54:22.910124 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:54:22.912091 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:54:22.912784 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:54:22.912843 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:54:22.928181 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:54:22.936661 systemd[1]: Switching root.
Aug 12 23:54:22.968669 systemd-journald[194]: Journal stopped
Aug 12 23:54:24.325684 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:54:24.325754 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:54:24.325789 kernel: SELinux: policy capability open_perms=1
Aug 12 23:54:24.325801 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:54:24.325813 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:54:24.325825 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:54:24.325838 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:54:24.325850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:54:24.325862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:54:24.325874 kernel: audit: type=1403 audit(1755042863.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:54:24.325892 systemd[1]: Successfully loaded SELinux policy in 46.219ms.
Aug 12 23:54:24.325923 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.127ms.
Aug 12 23:54:24.325936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:54:24.325949 systemd[1]: Detected virtualization kvm.
Aug 12 23:54:24.325971 systemd[1]: Detected architecture x86-64.
Aug 12 23:54:24.325984 systemd[1]: Detected first boot.
Aug 12 23:54:24.325996 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:54:24.326009 zram_generator::config[1057]: No configuration found.
Aug 12 23:54:24.326023 kernel: Guest personality initialized and is inactive
Aug 12 23:54:24.326038 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 12 23:54:24.326076 kernel: Initialized host personality
Aug 12 23:54:24.326091 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:54:24.326103 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:54:24.326116 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:54:24.326148 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:54:24.326164 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:54:24.326176 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:54:24.326189 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:54:24.326207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:54:24.326219 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:54:24.326232 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:54:24.326251 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:54:24.326264 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:54:24.326277 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:54:24.326290 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:54:24.326302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:54:24.326315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:54:24.326334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:54:24.326347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:54:24.326360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:54:24.326372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:54:24.326385 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 12 23:54:24.326398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:54:24.326411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:54:24.326426 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:54:24.326439 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:54:24.326451 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:54:24.326464 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:54:24.326476 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:54:24.326489 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:54:24.326501 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:54:24.326515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:54:24.326528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:54:24.326549 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:54:24.326562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:54:24.326574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:54:24.326587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:54:24.326608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:54:24.326622 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:54:24.326636 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:54:24.326648 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:54:24.326661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:24.326676 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:54:24.326688 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:54:24.326701 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:54:24.326714 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:54:24.326727 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:54:24.326739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:54:24.326752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:54:24.326764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:54:24.326777 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:54:24.326792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:54:24.326805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:54:24.326835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:54:24.326852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:54:24.326864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:54:24.326877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:54:24.326890 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:54:24.326903 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:54:24.326920 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:54:24.326932 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:54:24.326946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:54:24.326958 kernel: fuse: init (API version 7.39)
Aug 12 23:54:24.326980 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:54:24.326993 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:54:24.327006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:54:24.327018 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:54:24.327031 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:54:24.327067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:54:24.327087 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:54:24.327102 systemd[1]: Stopped verity-setup.service.
Aug 12 23:54:24.327114 kernel: ACPI: bus type drm_connector registered
Aug 12 23:54:24.327131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:24.327147 kernel: loop: module loaded
Aug 12 23:54:24.327161 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:54:24.327173 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:54:24.327186 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:54:24.327198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:54:24.327211 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:54:24.327246 systemd-journald[1132]: Collecting audit messages is disabled.
Aug 12 23:54:24.327276 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:54:24.327289 systemd-journald[1132]: Journal started
Aug 12 23:54:24.327312 systemd-journald[1132]: Runtime Journal (/run/log/journal/96b6476591e048b79971240d731a8c34) is 6M, max 48.4M, 42.3M free.
Aug 12 23:54:24.327359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:54:24.081346 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:54:24.099323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:54:24.099805 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:54:24.331342 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:54:24.332378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:54:24.334037 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:54:24.334283 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:54:24.335775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:54:24.336009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:54:24.337486 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:54:24.337713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:54:24.339126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:54:24.339344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:54:24.340982 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:54:24.341228 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:54:24.342617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:54:24.342836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:54:24.344499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:54:24.345975 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:54:24.347643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:54:24.349252 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:54:24.366573 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:54:24.373135 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:54:24.375468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:54:24.376755 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:54:24.376850 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:54:24.378911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:54:24.381340 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:54:24.383727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:54:24.384946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:54:24.387520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:54:24.392459 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:54:24.393727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:54:24.395368 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:54:24.396522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:54:24.399890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:54:24.406300 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:54:24.411176 systemd-journald[1132]: Time spent on flushing to /var/log/journal/96b6476591e048b79971240d731a8c34 is 23.610ms for 967 entries.
Aug 12 23:54:24.411176 systemd-journald[1132]: System Journal (/var/log/journal/96b6476591e048b79971240d731a8c34) is 8M, max 195.6M, 187.6M free.
Aug 12 23:54:24.448137 systemd-journald[1132]: Received client request to flush runtime journal.
Aug 12 23:54:24.448171 kernel: loop0: detected capacity change from 0 to 147912
Aug 12 23:54:24.410470 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:54:24.415470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:54:24.418368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:54:24.421668 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:54:24.423548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:54:24.434563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:54:24.436380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:54:24.437829 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:54:24.445566 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:54:24.450324 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 12 23:54:24.452662 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:54:24.462287 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Aug 12 23:54:24.462305 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Aug 12 23:54:24.466813 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 12 23:54:24.471501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:54:24.473543 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:54:24.482139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:54:24.484262 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:54:24.516343 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:54:24.518310 kernel: loop1: detected capacity change from 0 to 138176
Aug 12 23:54:24.531376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:54:24.548905 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Aug 12 23:54:24.548929 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Aug 12 23:54:24.554788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:54:24.563066 kernel: loop2: detected capacity change from 0 to 221472
Aug 12 23:54:24.597089 kernel: loop3: detected capacity change from 0 to 147912
Aug 12 23:54:24.613080 kernel: loop4: detected capacity change from 0 to 138176
Aug 12 23:54:24.630090 kernel: loop5: detected capacity change from 0 to 221472
Aug 12 23:54:24.638206 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:54:24.638828 (sd-merge)[1204]: Merged extensions into '/usr'.
Aug 12 23:54:24.643206 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:54:24.643226 systemd[1]: Reloading...
Aug 12 23:54:24.706085 zram_generator::config[1232]: No configuration found.
Aug 12 23:54:24.740425 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:54:24.858964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:54:24.928467 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:54:24.928596 systemd[1]: Reloading finished in 284 ms.
Aug 12 23:54:24.948800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:54:24.950555 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:54:24.966791 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:54:24.968964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:54:24.983022 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:54:24.983040 systemd[1]: Reloading...
Aug 12 23:54:24.996502 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:54:24.996799 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:54:24.997837 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:54:24.998185 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Aug 12 23:54:24.998275 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Aug 12 23:54:25.036150 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:54:25.036165 systemd-tmpfiles[1270]: Skipping /boot
Aug 12 23:54:25.046126 zram_generator::config[1302]: No configuration found.
Aug 12 23:54:25.053048 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:54:25.053142 systemd-tmpfiles[1270]: Skipping /boot
Aug 12 23:54:25.165134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:54:25.231758 systemd[1]: Reloading finished in 248 ms.
Aug 12 23:54:25.247191 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:54:25.267062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:54:25.278393 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:54:25.280979 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:54:25.283794 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:54:25.288250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:54:25.294373 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:54:25.297370 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:54:25.302223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.302406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:54:25.303971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:54:25.310118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:54:25.314135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:54:25.315346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:54:25.315456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:54:25.317497 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:54:25.318741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.321257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:54:25.321539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:54:25.323416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:54:25.323986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:54:25.329199 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:54:25.329653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:54:25.337163 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:54:25.344010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.345071 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Aug 12 23:54:25.345249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:54:25.351732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:54:25.354500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:54:25.359442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:54:25.361119 augenrules[1374]: No rules
Aug 12 23:54:25.360945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:54:25.361150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:54:25.365815 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:54:25.367247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.368950 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:54:25.369507 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:54:25.371466 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:54:25.373496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:54:25.373757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:54:25.375514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:54:25.375747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:54:25.377875 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:54:25.378145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:54:25.380371 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:54:25.384044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:54:25.387834 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:54:25.411773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.420407 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:54:25.423327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:54:25.425020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:54:25.428305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:54:25.434148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:54:25.448345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:54:25.449626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:54:25.449676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:54:25.454811 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:54:25.457120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:54:25.460446 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:54:25.462966 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:54:25.465318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:54:25.465704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:54:25.468118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:54:25.468455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:54:25.470678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:54:25.471337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:54:25.473572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:54:25.474199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:54:25.489158 augenrules[1409]: /sbin/augenrules: No change
Aug 12 23:54:25.493027 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 12 23:54:25.503959 augenrules[1441]: No rules
Aug 12 23:54:25.501265 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:54:25.501538 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:54:25.506080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1402)
Aug 12 23:54:25.520995 systemd-resolved[1341]: Positive Trust Anchors:
Aug 12 23:54:25.521467 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:54:25.521509 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:54:25.522751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:54:25.522827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:54:25.527203 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Aug 12 23:54:25.531310 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:54:25.532636 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:54:25.532820 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:54:25.534196 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:54:25.553085 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 12 23:54:25.553088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:54:25.565093 kernel: ACPI: button: Power Button [PWRF]
Aug 12 23:54:25.566327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:54:25.587753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:54:25.598426 systemd-networkd[1421]: lo: Link UP
Aug 12 23:54:25.598438 systemd-networkd[1421]: lo: Gained carrier
Aug 12 23:54:25.599174 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 12 23:54:25.601346 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 12 23:54:25.601583 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 12 23:54:25.602806 systemd-networkd[1421]: Enumeration completed
Aug 12 23:54:25.603244 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:54:25.604514 systemd[1]: Reached target network.target - Network.
Aug 12 23:54:25.606345 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:54:25.606358 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:54:25.607048 systemd-networkd[1421]: eth0: Link UP
Aug 12 23:54:25.607096 systemd-networkd[1421]: eth0: Gained carrier
Aug 12 23:54:25.607110 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:54:25.611248 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:54:25.618571 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:54:25.621069 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 12 23:54:25.626189 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:54:25.659712 systemd-timesyncd[1451]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:54:25.659761 systemd-timesyncd[1451]: Initial clock synchronization to Tue 2025-08-12 23:54:25.775109 UTC.
Aug 12 23:54:25.670125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:54:25.671497 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:54:25.676216 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:54:25.685709 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:54:25.724100 kernel: mousedev: PS/2 mouse device common for all mice
Aug 12 23:54:25.737820 kernel: kvm_amd: TSC scaling supported
Aug 12 23:54:25.737892 kernel: kvm_amd: Nested Virtualization enabled
Aug 12 23:54:25.737906 kernel: kvm_amd: Nested Paging enabled
Aug 12 23:54:25.737929 kernel: kvm_amd: LBR virtualization supported
Aug 12 23:54:25.739313 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 12 23:54:25.739408 kernel: kvm_amd: Virtual GIF supported
Aug 12 23:54:25.762107 kernel: EDAC MC: Ver: 3.0.0
Aug 12 23:54:25.797903 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:54:25.806010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:54:25.818454 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:54:25.827773 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:54:25.862454 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:54:25.864349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:54:25.865689 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:54:25.867062 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:54:25.868537 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:54:25.870246 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:54:25.871723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:54:25.873242 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:54:25.874710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:54:25.874737 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:54:25.875837 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:54:25.877896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:54:25.880813 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:54:25.884939 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:54:25.886621 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:54:25.887954 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:54:25.891828 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:54:25.893270 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:54:25.895656 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 12 23:54:25.897324 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:54:25.898490 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:54:25.899432 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:54:25.900401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:54:25.900432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:54:25.901503 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:54:25.903570 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:54:25.908094 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:54:25.908577 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:54:25.913908 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:54:25.915656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:54:25.916709 jq[1481]: false
Aug 12 23:54:25.917266 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:54:25.920189 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:54:25.923486 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:54:25.926788 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:54:25.935352 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found loop3
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found loop4
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found loop5
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found sr0
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda1
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda2
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda3
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found usr
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda4
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda6
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda7
Aug 12 23:54:25.941191 extend-filesystems[1482]: Found vda9
Aug 12 23:54:25.941191 extend-filesystems[1482]: Checking size of /dev/vda9
Aug 12 23:54:25.999188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1406)
Aug 12 23:54:25.937674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:54:25.999426 extend-filesystems[1482]: Resized partition /dev/vda9
Aug 12 23:54:25.938217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:54:26.006453 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024)
Aug 12 23:54:26.028159 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:54:25.939279 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:54:26.007659 dbus-daemon[1480]: [system] SELinux support is enabled
Aug 12 23:54:26.031456 jq[1494]: true
Aug 12 23:54:25.942988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:54:26.031706 tar[1502]: linux-amd64/helm
Aug 12 23:54:25.948268 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 12 23:54:26.032020 update_engine[1491]: I20250812 23:54:26.004673 1491 main.cc:92] Flatcar Update Engine starting
Aug 12 23:54:26.032020 update_engine[1491]: I20250812 23:54:26.015703 1491 update_check_scheduler.cc:74] Next update check in 11m45s
Aug 12 23:54:25.951461 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:54:25.951714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:54:26.032510 jq[1503]: true
Aug 12 23:54:25.960369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:54:25.960799 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:54:25.964635 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:54:25.964885 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:54:25.999603 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:54:26.057719 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:54:26.010726 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:54:26.030329 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:54:26.043299 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:54:26.043336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:54:26.046180 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:54:26.046199 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:54:26.059324 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:54:26.059324 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:54:26.059324 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 12 23:54:26.067622 extend-filesystems[1482]: Resized filesystem in /dev/vda9
Aug 12 23:54:26.059525 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:54:26.062394 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:54:26.062701 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:54:26.077930 systemd-logind[1489]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 12 23:54:26.077961 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 12 23:54:26.078647 systemd-logind[1489]: New seat seat0.
Aug 12 23:54:26.080355 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:54:26.109496 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:54:26.113007 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:54:26.115277 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 12 23:54:26.121765 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:54:26.229334 containerd[1508]: time="2025-08-12T23:54:26.229230263Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 12 23:54:26.253128 containerd[1508]: time="2025-08-12T23:54:26.253082800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255240 containerd[1508]: time="2025-08-12T23:54:26.255188653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255240 containerd[1508]: time="2025-08-12T23:54:26.255226318Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:54:26.255240 containerd[1508]: time="2025-08-12T23:54:26.255244999Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:54:26.255428 containerd[1508]: time="2025-08-12T23:54:26.255406526Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 12 23:54:26.255428 containerd[1508]: time="2025-08-12T23:54:26.255426731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255516 containerd[1508]: time="2025-08-12T23:54:26.255495822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255516 containerd[1508]: time="2025-08-12T23:54:26.255513598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255796 containerd[1508]: time="2025-08-12T23:54:26.255763791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255796 containerd[1508]: time="2025-08-12T23:54:26.255782746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255796 containerd[1508]: time="2025-08-12T23:54:26.255795653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255899 containerd[1508]: time="2025-08-12T23:54:26.255808337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.255936 containerd[1508]: time="2025-08-12T23:54:26.255910450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.256216 containerd[1508]: time="2025-08-12T23:54:26.256190035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:54:26.256382 containerd[1508]: time="2025-08-12T23:54:26.256360700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:54:26.256382 containerd[1508]: time="2025-08-12T23:54:26.256377154Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:54:26.256501 containerd[1508]: time="2025-08-12T23:54:26.256479682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:54:26.256575 containerd[1508]: time="2025-08-12T23:54:26.256546874Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:54:26.262399 containerd[1508]: time="2025-08-12T23:54:26.262370387Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:54:26.262471 containerd[1508]: time="2025-08-12T23:54:26.262410045Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:54:26.262471 containerd[1508]: time="2025-08-12T23:54:26.262434793Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 12 23:54:26.262471 containerd[1508]: time="2025-08-12T23:54:26.262452803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 12 23:54:26.262471 containerd[1508]: time="2025-08-12T23:54:26.262467844Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:54:26.262678 containerd[1508]: time="2025-08-12T23:54:26.262598292Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:54:26.262901 containerd[1508]: time="2025-08-12T23:54:26.262826817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:54:26.262952 containerd[1508]: time="2025-08-12T23:54:26.262943686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 12 23:54:26.262992 containerd[1508]: time="2025-08-12T23:54:26.262957518Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 12 23:54:26.262992 containerd[1508]: time="2025-08-12T23:54:26.262970640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 12 23:54:26.262992 containerd[1508]: time="2025-08-12T23:54:26.262983384Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263101 containerd[1508]: time="2025-08-12T23:54:26.262995713Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263101 containerd[1508]: time="2025-08-12T23:54:26.263008519Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263101 containerd[1508]: time="2025-08-12T23:54:26.263020887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263101 containerd[1508]: time="2025-08-12T23:54:26.263034486Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263101 containerd[1508]: time="2025-08-12T23:54:26.263046753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263058604Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263178410Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263205029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263219908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263236952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263250 containerd[1508]: time="2025-08-12T23:54:26.263249392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263261954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263274587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263286245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263308919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263323585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263337641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263349085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263361037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263373111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263387533Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263406935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263419111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263434 containerd[1508]: time="2025-08-12T23:54:26.263430006Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263482785Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263499097Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263509200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263576807Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263588525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263601209Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263611637Z" level=info msg="NRI interface is disabled by configuration."
Aug 12 23:54:26.263807 containerd[1508]: time="2025-08-12T23:54:26.263622868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 12 23:54:26.264092 containerd[1508]: time="2025-08-12T23:54:26.263914761Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:54:26.264092 containerd[1508]: time="2025-08-12T23:54:26.263990082Z" level=info msg="Connect containerd service" Aug 12 23:54:26.264092 containerd[1508]: time="2025-08-12T23:54:26.264013997Z" level=info msg="using legacy CRI server" Aug 12 23:54:26.264092 containerd[1508]: time="2025-08-12T23:54:26.264020472Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:54:26.264376 containerd[1508]: time="2025-08-12T23:54:26.264133529Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:54:26.264814 containerd[1508]: time="2025-08-12T23:54:26.264783064Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Aug 12 23:54:26.264940 containerd[1508]: time="2025-08-12T23:54:26.264906550Z" level=info msg="Start subscribing containerd event" Aug 12 23:54:26.264976 containerd[1508]: time="2025-08-12T23:54:26.264945597Z" level=info msg="Start recovering state" Aug 12 23:54:26.265025 containerd[1508]: time="2025-08-12T23:54:26.265005379Z" level=info msg="Start event monitor" Aug 12 23:54:26.265025 containerd[1508]: time="2025-08-12T23:54:26.265018907Z" level=info msg="Start snapshots syncer" Aug 12 23:54:26.265109 containerd[1508]: time="2025-08-12T23:54:26.265028307Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:54:26.265109 containerd[1508]: time="2025-08-12T23:54:26.265037902Z" level=info msg="Start streaming server" Aug 12 23:54:26.265527 containerd[1508]: time="2025-08-12T23:54:26.265483385Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:54:26.265566 containerd[1508]: time="2025-08-12T23:54:26.265556013Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:54:26.267619 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:54:26.268969 containerd[1508]: time="2025-08-12T23:54:26.268775697Z" level=info msg="containerd successfully booted in 0.041360s" Aug 12 23:54:26.441702 tar[1502]: linux-amd64/LICENSE Aug 12 23:54:26.441702 tar[1502]: linux-amd64/README.md Aug 12 23:54:26.456737 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:54:26.556141 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:54:26.580684 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:54:26.593360 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:54:26.600736 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:54:26.601028 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Aug 12 23:54:26.603936 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:54:26.619280 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:54:26.622476 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:54:26.624820 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 12 23:54:26.626113 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:54:26.951559 systemd-networkd[1421]: eth0: Gained IPv6LL Aug 12 23:54:26.955509 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:54:26.957434 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:54:26.972337 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:54:26.975204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:26.977536 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:54:26.998604 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:54:26.998959 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:54:27.000690 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:54:27.005133 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:54:27.767879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:54:27.769621 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:54:27.770881 systemd[1]: Startup finished in 995ms (kernel) + 6.760s (initrd) + 4.334s (userspace) = 12.089s. 
Aug 12 23:54:27.803481 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:54:28.705752 kubelet[1594]: E0812 23:54:28.705644 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:54:28.712951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:54:28.713495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:54:28.714598 systemd[1]: kubelet.service: Consumed 1.535s CPU time, 268.8M memory peak. Aug 12 23:54:30.474377 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:54:30.483367 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:56670.service - OpenSSH per-connection server daemon (10.0.0.1:56670). Aug 12 23:54:30.529881 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 56670 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:30.531635 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:30.543695 systemd-logind[1489]: New session 1 of user core. Aug 12 23:54:30.545581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:54:30.562594 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:54:30.578728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:54:30.581882 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Aug 12 23:54:30.603745 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:54:30.606744 systemd-logind[1489]: New session c1 of user core. Aug 12 23:54:30.784293 systemd[1611]: Queued start job for default target default.target. Aug 12 23:54:30.795912 systemd[1611]: Created slice app.slice - User Application Slice. Aug 12 23:54:30.795949 systemd[1611]: Reached target paths.target - Paths. Aug 12 23:54:30.796010 systemd[1611]: Reached target timers.target - Timers. Aug 12 23:54:30.798340 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:54:30.811192 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:54:30.811397 systemd[1611]: Reached target sockets.target - Sockets. Aug 12 23:54:30.811476 systemd[1611]: Reached target basic.target - Basic System. Aug 12 23:54:30.811543 systemd[1611]: Reached target default.target - Main User Target. Aug 12 23:54:30.811605 systemd[1611]: Startup finished in 196ms. Aug 12 23:54:30.811663 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:54:30.813448 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:54:30.893497 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672). Aug 12 23:54:30.929920 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:30.931955 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:30.937026 systemd-logind[1489]: New session 2 of user core. Aug 12 23:54:30.946265 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 12 23:54:31.003896 sshd[1624]: Connection closed by 10.0.0.1 port 56672 Aug 12 23:54:31.004290 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:31.031215 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:56672.service: Deactivated successfully. Aug 12 23:54:31.033819 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:54:31.035968 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:54:31.044529 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:56684.service - OpenSSH per-connection server daemon (10.0.0.1:56684). Aug 12 23:54:31.045691 systemd-logind[1489]: Removed session 2. Aug 12 23:54:31.078577 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 56684 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:31.080436 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:31.085238 systemd-logind[1489]: New session 3 of user core. Aug 12 23:54:31.095302 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:54:31.146935 sshd[1632]: Connection closed by 10.0.0.1 port 56684 Aug 12 23:54:31.147952 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:31.168220 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:56684.service: Deactivated successfully. Aug 12 23:54:31.171043 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:54:31.172700 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:54:31.185344 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:56688.service - OpenSSH per-connection server daemon (10.0.0.1:56688). Aug 12 23:54:31.186443 systemd-logind[1489]: Removed session 3. 
Aug 12 23:54:31.219441 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 56688 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:31.221345 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:31.226145 systemd-logind[1489]: New session 4 of user core. Aug 12 23:54:31.236186 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:54:31.291694 sshd[1640]: Connection closed by 10.0.0.1 port 56688 Aug 12 23:54:31.292373 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:31.306147 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:56688.service: Deactivated successfully. Aug 12 23:54:31.308188 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:54:31.310055 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:54:31.317392 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700). Aug 12 23:54:31.318418 systemd-logind[1489]: Removed session 4. Aug 12 23:54:31.355846 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:31.357363 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:31.362380 systemd-logind[1489]: New session 5 of user core. Aug 12 23:54:31.372267 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:54:31.684996 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:54:31.685486 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:54:31.703639 sudo[1649]: pam_unix(sudo:session): session closed for user root Aug 12 23:54:31.706016 sshd[1648]: Connection closed by 10.0.0.1 port 56700 Aug 12 23:54:31.706620 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:31.724429 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:56700.service: Deactivated successfully. Aug 12 23:54:31.727574 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:54:31.729862 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:54:31.745636 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714). Aug 12 23:54:31.747088 systemd-logind[1489]: Removed session 5. Aug 12 23:54:31.781638 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:31.783483 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:31.788188 systemd-logind[1489]: New session 6 of user core. Aug 12 23:54:31.804214 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:54:31.861696 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:54:31.862110 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:54:31.867922 sudo[1659]: pam_unix(sudo:session): session closed for user root Aug 12 23:54:31.876603 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:54:31.877073 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:54:31.898567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:54:31.938345 augenrules[1681]: No rules Aug 12 23:54:31.940615 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:54:31.941056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:54:31.942565 sudo[1658]: pam_unix(sudo:session): session closed for user root Aug 12 23:54:31.944243 sshd[1657]: Connection closed by 10.0.0.1 port 56714 Aug 12 23:54:31.944714 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:31.958539 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:56714.service: Deactivated successfully. Aug 12 23:54:31.960807 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:54:31.962785 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:54:31.971474 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:56728.service - OpenSSH per-connection server daemon (10.0.0.1:56728). Aug 12 23:54:31.972490 systemd-logind[1489]: Removed session 6. Aug 12 23:54:32.007271 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 56728 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:54:32.008826 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:32.013793 systemd-logind[1489]: New session 7 of user core. 
Aug 12 23:54:32.024311 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:54:32.080116 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:54:32.080490 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:54:32.387443 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:54:32.387564 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:54:33.326863 dockerd[1712]: time="2025-08-12T23:54:33.326755325Z" level=info msg="Starting up" Aug 12 23:54:33.853770 dockerd[1712]: time="2025-08-12T23:54:33.853710062Z" level=info msg="Loading containers: start." Aug 12 23:54:34.059095 kernel: Initializing XFRM netlink socket Aug 12 23:54:34.153483 systemd-networkd[1421]: docker0: Link UP Aug 12 23:54:34.207642 dockerd[1712]: time="2025-08-12T23:54:34.207582980Z" level=info msg="Loading containers: done." Aug 12 23:54:34.224673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2179644714-merged.mount: Deactivated successfully. 
Aug 12 23:54:34.225968 dockerd[1712]: time="2025-08-12T23:54:34.225922473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:54:34.226047 dockerd[1712]: time="2025-08-12T23:54:34.226027427Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 12 23:54:34.226224 dockerd[1712]: time="2025-08-12T23:54:34.226195530Z" level=info msg="Daemon has completed initialization" Aug 12 23:54:34.266944 dockerd[1712]: time="2025-08-12T23:54:34.266860981Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:54:34.267109 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:54:35.292967 containerd[1508]: time="2025-08-12T23:54:35.292894210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:54:37.307406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031582923.mount: Deactivated successfully. 
Aug 12 23:54:38.671782 containerd[1508]: time="2025-08-12T23:54:38.671727260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:38.672349 containerd[1508]: time="2025-08-12T23:54:38.672311647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 12 23:54:38.673505 containerd[1508]: time="2025-08-12T23:54:38.673475617Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:38.679192 containerd[1508]: time="2025-08-12T23:54:38.679163462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:38.682567 containerd[1508]: time="2025-08-12T23:54:38.680674337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 3.387716498s" Aug 12 23:54:38.682567 containerd[1508]: time="2025-08-12T23:54:38.680711243Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 12 23:54:38.683282 containerd[1508]: time="2025-08-12T23:54:38.683243961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:54:38.963662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 12 23:54:38.976271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:39.183255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:54:39.187390 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:54:39.259408 kubelet[1971]: E0812 23:54:39.259262 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:54:39.266272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:54:39.266543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:54:39.266998 systemd[1]: kubelet.service: Consumed 278ms CPU time, 113.2M memory peak. 
Aug 12 23:54:40.525869 containerd[1508]: time="2025-08-12T23:54:40.525798774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:40.526683 containerd[1508]: time="2025-08-12T23:54:40.526628439Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 12 23:54:40.527956 containerd[1508]: time="2025-08-12T23:54:40.527893988Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:40.530769 containerd[1508]: time="2025-08-12T23:54:40.530728265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:40.531865 containerd[1508]: time="2025-08-12T23:54:40.531827001Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.84854811s" Aug 12 23:54:40.531865 containerd[1508]: time="2025-08-12T23:54:40.531863169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 12 23:54:40.532415 containerd[1508]: time="2025-08-12T23:54:40.532370174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:54:42.120795 containerd[1508]: time="2025-08-12T23:54:42.120712351Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:42.121550 containerd[1508]: time="2025-08-12T23:54:42.121496884Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 12 23:54:42.122698 containerd[1508]: time="2025-08-12T23:54:42.122638232Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:42.126047 containerd[1508]: time="2025-08-12T23:54:42.125901753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:42.128723 containerd[1508]: time="2025-08-12T23:54:42.128616334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.596192995s" Aug 12 23:54:42.128723 containerd[1508]: time="2025-08-12T23:54:42.128712095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 12 23:54:42.129442 containerd[1508]: time="2025-08-12T23:54:42.129409417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:54:43.477942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288250776.mount: Deactivated successfully. 
Aug 12 23:54:44.270474 containerd[1508]: time="2025-08-12T23:54:44.270382368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:44.271310 containerd[1508]: time="2025-08-12T23:54:44.271233119Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 12 23:54:44.272593 containerd[1508]: time="2025-08-12T23:54:44.272556361Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:44.276953 containerd[1508]: time="2025-08-12T23:54:44.276902543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:44.277597 containerd[1508]: time="2025-08-12T23:54:44.277542545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.148099552s" Aug 12 23:54:44.277597 containerd[1508]: time="2025-08-12T23:54:44.277580526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 12 23:54:44.278196 containerd[1508]: time="2025-08-12T23:54:44.278162293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:54:44.885641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597381824.mount: Deactivated successfully. 
Aug 12 23:54:45.665757 containerd[1508]: time="2025-08-12T23:54:45.665692949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:45.666415 containerd[1508]: time="2025-08-12T23:54:45.666355486Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 12 23:54:45.667523 containerd[1508]: time="2025-08-12T23:54:45.667472843Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:45.670350 containerd[1508]: time="2025-08-12T23:54:45.670320162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:45.671462 containerd[1508]: time="2025-08-12T23:54:45.671425532Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.393156391s" Aug 12 23:54:45.671462 containerd[1508]: time="2025-08-12T23:54:45.671459084Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 12 23:54:45.671996 containerd[1508]: time="2025-08-12T23:54:45.671973313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:54:46.234245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564649897.mount: Deactivated successfully. 
Aug 12 23:54:46.239960 containerd[1508]: time="2025-08-12T23:54:46.239912249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:46.240636 containerd[1508]: time="2025-08-12T23:54:46.240565395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 12 23:54:46.241722 containerd[1508]: time="2025-08-12T23:54:46.241692912Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:46.243962 containerd[1508]: time="2025-08-12T23:54:46.243930676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:46.244652 containerd[1508]: time="2025-08-12T23:54:46.244616285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.6158ms" Aug 12 23:54:46.244652 containerd[1508]: time="2025-08-12T23:54:46.244646371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 12 23:54:46.245190 containerd[1508]: time="2025-08-12T23:54:46.245147770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:54:46.901192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1324891560.mount: Deactivated successfully. 
Aug 12 23:54:49.024426 containerd[1508]: time="2025-08-12T23:54:49.024330072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:49.025122 containerd[1508]: time="2025-08-12T23:54:49.025040258Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 12 23:54:49.026531 containerd[1508]: time="2025-08-12T23:54:49.026479267Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:49.029800 containerd[1508]: time="2025-08-12T23:54:49.029748037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:54:49.031021 containerd[1508]: time="2025-08-12T23:54:49.030976209Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.785800231s" Aug 12 23:54:49.031021 containerd[1508]: time="2025-08-12T23:54:49.031014687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 12 23:54:49.453382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:54:49.469324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:49.873292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:54:49.877704 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:54:49.976005 kubelet[2136]: E0812 23:54:49.975922 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:54:49.980780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:54:49.981032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:54:49.981503 systemd[1]: kubelet.service: Consumed 337ms CPU time, 110.1M memory peak. Aug 12 23:54:51.249617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:54:51.249791 systemd[1]: kubelet.service: Consumed 337ms CPU time, 110.1M memory peak. Aug 12 23:54:51.264531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:51.294015 systemd[1]: Reload requested from client PID 2153 ('systemctl') (unit session-7.scope)... Aug 12 23:54:51.294083 systemd[1]: Reloading... Aug 12 23:54:51.403103 zram_generator::config[2203]: No configuration found. Aug 12 23:54:52.051474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:54:52.159440 systemd[1]: Reloading finished in 864 ms. Aug 12 23:54:52.217873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:54:52.224219 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:54:52.225407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:52.225819 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:54:52.226130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:54:52.226180 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.3M memory peak. Aug 12 23:54:52.229042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:54:52.403638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:54:52.418512 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:54:52.467429 kubelet[2248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:54:52.467429 kubelet[2248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:54:52.467429 kubelet[2248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:54:52.467886 kubelet[2248]: I0812 23:54:52.467499 2248 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:54:52.750006 kubelet[2248]: I0812 23:54:52.749843 2248 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:54:52.750006 kubelet[2248]: I0812 23:54:52.749891 2248 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:54:52.750193 kubelet[2248]: I0812 23:54:52.750172 2248 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:54:52.769669 kubelet[2248]: E0812 23:54:52.769598 2248 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:52.771017 kubelet[2248]: I0812 23:54:52.770966 2248 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:54:52.779575 kubelet[2248]: E0812 23:54:52.779531 2248 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:54:52.779575 kubelet[2248]: I0812 23:54:52.779572 2248 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:54:52.786849 kubelet[2248]: I0812 23:54:52.786784 2248 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:54:52.787687 kubelet[2248]: I0812 23:54:52.787646 2248 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:54:52.787973 kubelet[2248]: I0812 23:54:52.787886 2248 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:54:52.788225 kubelet[2248]: I0812 23:54:52.787964 2248 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 12 23:54:52.788393 kubelet[2248]: I0812 23:54:52.788243 2248 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:54:52.788393 kubelet[2248]: I0812 23:54:52.788258 2248 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:54:52.788483 kubelet[2248]: I0812 23:54:52.788452 2248 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:54:52.791159 kubelet[2248]: I0812 23:54:52.791125 2248 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:54:52.791159 kubelet[2248]: I0812 23:54:52.791149 2248 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:54:52.791245 kubelet[2248]: I0812 23:54:52.791184 2248 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:54:52.791245 kubelet[2248]: I0812 23:54:52.791207 2248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:54:52.807568 kubelet[2248]: I0812 23:54:52.807473 2248 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:54:52.808913 kubelet[2248]: I0812 23:54:52.808632 2248 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:54:52.808913 kubelet[2248]: W0812 23:54:52.808739 2248 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 12 23:54:52.808913 kubelet[2248]: W0812 23:54:52.808792 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:52.808913 kubelet[2248]: E0812 23:54:52.808841 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:52.808913 kubelet[2248]: W0812 23:54:52.808713 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:52.808913 kubelet[2248]: E0812 23:54:52.808877 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:52.811304 kubelet[2248]: I0812 23:54:52.811272 2248 server.go:1274] "Started kubelet" Aug 12 23:54:52.812018 kubelet[2248]: I0812 23:54:52.811712 2248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:54:52.813093 kubelet[2248]: I0812 23:54:52.812225 2248 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:54:52.813093 kubelet[2248]: I0812 23:54:52.812231 2248 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 
Aug 12 23:54:52.813093 kubelet[2248]: I0812 23:54:52.812808 2248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:54:52.813303 kubelet[2248]: I0812 23:54:52.813281 2248 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:54:52.814904 kubelet[2248]: I0812 23:54:52.814876 2248 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:54:52.815804 kubelet[2248]: E0812 23:54:52.815774 2248 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:54:52.815850 kubelet[2248]: I0812 23:54:52.815829 2248 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:54:52.816099 kubelet[2248]: I0812 23:54:52.816071 2248 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:54:52.816179 kubelet[2248]: I0812 23:54:52.816161 2248 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:54:52.816477 kubelet[2248]: W0812 23:54:52.816435 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:52.816477 kubelet[2248]: E0812 23:54:52.816474 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:52.816639 kubelet[2248]: E0812 23:54:52.814404 2248 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.185b2a365f96877b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:54:52.811241339 +0000 UTC m=+0.387397908,LastTimestamp:2025-08-12 23:54:52.811241339 +0000 UTC m=+0.387397908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:54:52.816707 kubelet[2248]: E0812 23:54:52.816681 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Aug 12 23:54:52.818766 kubelet[2248]: E0812 23:54:52.818740 2248 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:54:52.818926 kubelet[2248]: I0812 23:54:52.818906 2248 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:54:52.818954 kubelet[2248]: I0812 23:54:52.818926 2248 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:54:52.819187 kubelet[2248]: I0812 23:54:52.819004 2248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:54:52.835606 kubelet[2248]: I0812 23:54:52.835548 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:54:52.837970 kubelet[2248]: I0812 23:54:52.837633 2248 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:54:52.837970 kubelet[2248]: I0812 23:54:52.837665 2248 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:54:52.837970 kubelet[2248]: I0812 23:54:52.837695 2248 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:54:52.837970 kubelet[2248]: E0812 23:54:52.837741 2248 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:54:52.838819 kubelet[2248]: W0812 23:54:52.838764 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:52.838881 kubelet[2248]: E0812 23:54:52.838830 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:52.839940 kubelet[2248]: I0812 23:54:52.839922 2248 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:54:52.839940 kubelet[2248]: I0812 23:54:52.839936 2248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:54:52.840035 kubelet[2248]: I0812 23:54:52.839953 2248 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:54:52.916938 kubelet[2248]: E0812 23:54:52.916869 2248 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:54:52.937973 kubelet[2248]: E0812 23:54:52.937913 2248 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:54:53.017672 kubelet[2248]: E0812 23:54:53.017419 2248 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:54:53.018297 kubelet[2248]: E0812 23:54:53.018169 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Aug 12 23:54:53.117813 kubelet[2248]: E0812 23:54:53.117729 2248 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:54:53.138998 kubelet[2248]: E0812 23:54:53.138919 2248 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:54:53.218512 kubelet[2248]: E0812 23:54:53.218435 2248 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:54:53.237821 kubelet[2248]: I0812 23:54:53.237742 2248 policy_none.go:49] "None policy: Start" Aug 12 23:54:53.238846 kubelet[2248]: I0812 23:54:53.238811 2248 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:54:53.238846 kubelet[2248]: I0812 23:54:53.238842 2248 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:54:53.251962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:54:53.264960 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:54:53.268459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 12 23:54:53.281172 kubelet[2248]: I0812 23:54:53.281125 2248 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:54:53.281498 kubelet[2248]: I0812 23:54:53.281450 2248 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:54:53.281498 kubelet[2248]: I0812 23:54:53.281477 2248 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:54:53.281970 kubelet[2248]: I0812 23:54:53.281901 2248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:54:53.283331 kubelet[2248]: E0812 23:54:53.283282 2248 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:54:53.384134 kubelet[2248]: I0812 23:54:53.384067 2248 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:54:53.384505 kubelet[2248]: E0812 23:54:53.384473 2248 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Aug 12 23:54:53.421612 kubelet[2248]: E0812 23:54:53.419508 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Aug 12 23:54:53.548858 systemd[1]: Created slice kubepods-burstable-pod7702d1de1b3e69c4fca1cfa4ab44d05f.slice - libcontainer container kubepods-burstable-pod7702d1de1b3e69c4fca1cfa4ab44d05f.slice. Aug 12 23:54:53.574793 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. 
Aug 12 23:54:53.586039 kubelet[2248]: I0812 23:54:53.586002 2248 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:54:53.586512 kubelet[2248]: E0812 23:54:53.586447 2248 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Aug 12 23:54:53.590521 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. Aug 12 23:54:53.620769 kubelet[2248]: I0812 23:54:53.620727 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:54:53.620769 kubelet[2248]: I0812 23:54:53.620765 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:54:53.620907 kubelet[2248]: I0812 23:54:53.620786 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:54:53.620907 kubelet[2248]: I0812 23:54:53.620805 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:54:53.620907 kubelet[2248]: I0812 23:54:53.620823 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:54:53.620907 kubelet[2248]: I0812 23:54:53.620838 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:54:53.620907 kubelet[2248]: I0812 23:54:53.620850 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:54:53.621088 kubelet[2248]: I0812 23:54:53.620876 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:54:53.621088 kubelet[2248]: I0812 23:54:53.620910 2248 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:54:53.738894 kubelet[2248]: W0812 23:54:53.738812 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:53.738894 kubelet[2248]: E0812 23:54:53.738899 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:53.872691 kubelet[2248]: E0812 23:54:53.872628 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:53.873471 containerd[1508]: time="2025-08-12T23:54:53.873407601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7702d1de1b3e69c4fca1cfa4ab44d05f,Namespace:kube-system,Attempt:0,}" Aug 12 23:54:53.888584 kubelet[2248]: E0812 23:54:53.888557 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:53.888944 containerd[1508]: time="2025-08-12T23:54:53.888891350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 12 23:54:53.893177 kubelet[2248]: E0812 23:54:53.893147 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:53.893542 containerd[1508]: time="2025-08-12T23:54:53.893499844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 12 23:54:53.988305 kubelet[2248]: I0812 23:54:53.988249 2248 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:54:53.988718 kubelet[2248]: E0812 23:54:53.988673 2248 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Aug 12 23:54:54.109645 kubelet[2248]: W0812 23:54:54.109552 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:54.109645 kubelet[2248]: E0812 23:54:54.109641 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:54.222515 kubelet[2248]: E0812 23:54:54.222288 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Aug 12 23:54:54.269008 kubelet[2248]: W0812 23:54:54.268928 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:54.269203 kubelet[2248]: E0812 23:54:54.269015 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:54.424136 kubelet[2248]: W0812 23:54:54.424024 2248 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Aug 12 23:54:54.424315 kubelet[2248]: E0812 23:54:54.424253 2248 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:54.791123 kubelet[2248]: I0812 23:54:54.791077 2248 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:54:54.791642 kubelet[2248]: E0812 23:54:54.791443 2248 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Aug 12 23:54:54.932073 kubelet[2248]: E0812 23:54:54.932002 2248 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:54:55.514349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533671176.mount: Deactivated successfully. Aug 12 23:54:55.520280 containerd[1508]: time="2025-08-12T23:54:55.520237620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:54:55.522190 containerd[1508]: time="2025-08-12T23:54:55.522147833Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 12 23:54:55.525221 containerd[1508]: time="2025-08-12T23:54:55.525194969Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:54:55.526706 containerd[1508]: time="2025-08-12T23:54:55.526685490Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:54:55.528093 containerd[1508]: time="2025-08-12T23:54:55.528045227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:54:55.529099 containerd[1508]: time="2025-08-12T23:54:55.529073322Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:54:55.529990 containerd[1508]: time="2025-08-12T23:54:55.529962816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:54:55.531101 containerd[1508]: 
time="2025-08-12T23:54:55.531069232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:54:55.531875 containerd[1508]: time="2025-08-12T23:54:55.531843336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.65832258s" Aug 12 23:54:55.536064 containerd[1508]: time="2025-08-12T23:54:55.536026631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.642417763s" Aug 12 23:54:55.536874 containerd[1508]: time="2025-08-12T23:54:55.536819215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.647809217s" Aug 12 23:54:55.797981 containerd[1508]: time="2025-08-12T23:54:55.797412153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:54:55.797981 containerd[1508]: time="2025-08-12T23:54:55.797502259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:54:55.797981 containerd[1508]: time="2025-08-12T23:54:55.797521259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.797981 containerd[1508]: time="2025-08-12T23:54:55.797631709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.798651 containerd[1508]: time="2025-08-12T23:54:55.798425005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:54:55.798651 containerd[1508]: time="2025-08-12T23:54:55.798492001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:54:55.798651 containerd[1508]: time="2025-08-12T23:54:55.798506041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.798651 containerd[1508]: time="2025-08-12T23:54:55.798580803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.803539 containerd[1508]: time="2025-08-12T23:54:55.803401565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:54:55.803837 containerd[1508]: time="2025-08-12T23:54:55.803600238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:54:55.803837 containerd[1508]: time="2025-08-12T23:54:55.803673237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.804097 containerd[1508]: time="2025-08-12T23:54:55.803831009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:55.841255 kubelet[2248]: E0812 23:54:55.823487 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="3.2s" Aug 12 23:54:55.865420 systemd[1]: Started cri-containerd-b6f3e6d5a606c35e8abea964ddbc078686b94aafdfc1a74a70d552435d8a1463.scope - libcontainer container b6f3e6d5a606c35e8abea964ddbc078686b94aafdfc1a74a70d552435d8a1463. Aug 12 23:54:55.874013 systemd[1]: Started cri-containerd-39fcc553a5111ecca27807453e267613b1ba9e0f5ea27ecc8d61d04d68f90b9b.scope - libcontainer container 39fcc553a5111ecca27807453e267613b1ba9e0f5ea27ecc8d61d04d68f90b9b. Aug 12 23:54:55.882115 systemd[1]: Started cri-containerd-eed7c50f2e63c8af663a85570c75e3ed3c669a0335fe57b12902c26bdfd569f8.scope - libcontainer container eed7c50f2e63c8af663a85570c75e3ed3c669a0335fe57b12902c26bdfd569f8. 
Aug 12 23:54:55.926333 containerd[1508]: time="2025-08-12T23:54:55.926285356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6f3e6d5a606c35e8abea964ddbc078686b94aafdfc1a74a70d552435d8a1463\"" Aug 12 23:54:55.927825 kubelet[2248]: E0812 23:54:55.927680 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:55.932075 containerd[1508]: time="2025-08-12T23:54:55.932023684Z" level=info msg="CreateContainer within sandbox \"b6f3e6d5a606c35e8abea964ddbc078686b94aafdfc1a74a70d552435d8a1463\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:54:55.932634 containerd[1508]: time="2025-08-12T23:54:55.932484847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"39fcc553a5111ecca27807453e267613b1ba9e0f5ea27ecc8d61d04d68f90b9b\"" Aug 12 23:54:55.934323 kubelet[2248]: E0812 23:54:55.934192 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:55.937045 containerd[1508]: time="2025-08-12T23:54:55.936995995Z" level=info msg="CreateContainer within sandbox \"39fcc553a5111ecca27807453e267613b1ba9e0f5ea27ecc8d61d04d68f90b9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:54:55.938005 containerd[1508]: time="2025-08-12T23:54:55.937944989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7702d1de1b3e69c4fca1cfa4ab44d05f,Namespace:kube-system,Attempt:0,} returns sandbox id \"eed7c50f2e63c8af663a85570c75e3ed3c669a0335fe57b12902c26bdfd569f8\"" Aug 12 
23:54:55.939118 kubelet[2248]: E0812 23:54:55.939015 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:55.940992 containerd[1508]: time="2025-08-12T23:54:55.940950604Z" level=info msg="CreateContainer within sandbox \"eed7c50f2e63c8af663a85570c75e3ed3c669a0335fe57b12902c26bdfd569f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:54:56.074208 containerd[1508]: time="2025-08-12T23:54:56.074146690Z" level=info msg="CreateContainer within sandbox \"b6f3e6d5a606c35e8abea964ddbc078686b94aafdfc1a74a70d552435d8a1463\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b08dfd605a434c7cc17a06c1a23416cd9e4566e1d2d6a3f938ac096d2557e6c\"" Aug 12 23:54:56.074905 containerd[1508]: time="2025-08-12T23:54:56.074873403Z" level=info msg="StartContainer for \"1b08dfd605a434c7cc17a06c1a23416cd9e4566e1d2d6a3f938ac096d2557e6c\"" Aug 12 23:54:56.077813 containerd[1508]: time="2025-08-12T23:54:56.077759942Z" level=info msg="CreateContainer within sandbox \"39fcc553a5111ecca27807453e267613b1ba9e0f5ea27ecc8d61d04d68f90b9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f889a6d2e2d4672375c348b1fc3095469e523cfc3df13784df11f3a1e7d38904\"" Aug 12 23:54:56.078181 containerd[1508]: time="2025-08-12T23:54:56.078160187Z" level=info msg="StartContainer for \"f889a6d2e2d4672375c348b1fc3095469e523cfc3df13784df11f3a1e7d38904\"" Aug 12 23:54:56.081114 containerd[1508]: time="2025-08-12T23:54:56.081072390Z" level=info msg="CreateContainer within sandbox \"eed7c50f2e63c8af663a85570c75e3ed3c669a0335fe57b12902c26bdfd569f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1697b982192fd44f68ad55d2797df8713ad49fbe3c68ea50ce9da012bdd9f97d\"" Aug 12 23:54:56.082829 containerd[1508]: time="2025-08-12T23:54:56.081519416Z" level=info 
msg="StartContainer for \"1697b982192fd44f68ad55d2797df8713ad49fbe3c68ea50ce9da012bdd9f97d\"" Aug 12 23:54:56.109288 systemd[1]: Started cri-containerd-f889a6d2e2d4672375c348b1fc3095469e523cfc3df13784df11f3a1e7d38904.scope - libcontainer container f889a6d2e2d4672375c348b1fc3095469e523cfc3df13784df11f3a1e7d38904. Aug 12 23:54:56.114257 systemd[1]: Started cri-containerd-1697b982192fd44f68ad55d2797df8713ad49fbe3c68ea50ce9da012bdd9f97d.scope - libcontainer container 1697b982192fd44f68ad55d2797df8713ad49fbe3c68ea50ce9da012bdd9f97d. Aug 12 23:54:56.116642 systemd[1]: Started cri-containerd-1b08dfd605a434c7cc17a06c1a23416cd9e4566e1d2d6a3f938ac096d2557e6c.scope - libcontainer container 1b08dfd605a434c7cc17a06c1a23416cd9e4566e1d2d6a3f938ac096d2557e6c. Aug 12 23:54:56.183797 containerd[1508]: time="2025-08-12T23:54:56.183728853Z" level=info msg="StartContainer for \"1b08dfd605a434c7cc17a06c1a23416cd9e4566e1d2d6a3f938ac096d2557e6c\" returns successfully" Aug 12 23:54:56.183972 containerd[1508]: time="2025-08-12T23:54:56.183873041Z" level=info msg="StartContainer for \"1697b982192fd44f68ad55d2797df8713ad49fbe3c68ea50ce9da012bdd9f97d\" returns successfully" Aug 12 23:54:56.239995 containerd[1508]: time="2025-08-12T23:54:56.239940674Z" level=info msg="StartContainer for \"f889a6d2e2d4672375c348b1fc3095469e523cfc3df13784df11f3a1e7d38904\" returns successfully" Aug 12 23:54:56.395121 kubelet[2248]: I0812 23:54:56.393935 2248 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:54:56.850761 kubelet[2248]: E0812 23:54:56.850718 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:56.857815 kubelet[2248]: E0812 23:54:56.857791 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 
23:54:56.858151 kubelet[2248]: E0812 23:54:56.858127 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:57.860630 kubelet[2248]: E0812 23:54:57.860589 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:57.948211 kubelet[2248]: I0812 23:54:57.947573 2248 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:54:58.795000 kubelet[2248]: I0812 23:54:58.794705 2248 apiserver.go:52] "Watching apiserver" Aug 12 23:54:58.817080 kubelet[2248]: I0812 23:54:58.816982 2248 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:54:59.171037 kubelet[2248]: E0812 23:54:59.170954 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:59.862886 kubelet[2248]: E0812 23:54:59.862835 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:00.965113 kubelet[2248]: E0812 23:55:00.965043 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:01.749298 kubelet[2248]: E0812 23:55:01.749255 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:01.851336 systemd[1]: Reload requested from client PID 2530 ('systemctl') (unit session-7.scope)... Aug 12 23:55:01.851354 systemd[1]: Reloading... 
Aug 12 23:55:01.867942 kubelet[2248]: E0812 23:55:01.867583 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:01.867942 kubelet[2248]: E0812 23:55:01.867634 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:01.935090 zram_generator::config[2575]: No configuration found. Aug 12 23:55:02.061118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:55:02.187752 systemd[1]: Reloading finished in 335 ms. Aug 12 23:55:02.222247 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:55:02.238561 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:55:02.238973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:55:02.239039 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 133M memory peak. Aug 12 23:55:02.251456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:55:02.436818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:55:02.443801 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:55:02.494084 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:55:02.494084 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 12 23:55:02.494084 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:55:02.494084 kubelet[2619]: I0812 23:55:02.494002 2619 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:55:02.502670 kubelet[2619]: I0812 23:55:02.502613 2619 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:55:02.502670 kubelet[2619]: I0812 23:55:02.502647 2619 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:55:02.503038 kubelet[2619]: I0812 23:55:02.502941 2619 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:55:02.504672 kubelet[2619]: I0812 23:55:02.504643 2619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:55:02.507638 kubelet[2619]: I0812 23:55:02.507582 2619 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:55:02.510780 kubelet[2619]: E0812 23:55:02.510727 2619 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:55:02.510780 kubelet[2619]: I0812 23:55:02.510767 2619 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:55:02.625934 kubelet[2619]: I0812 23:55:02.625701 2619 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:55:02.625934 kubelet[2619]: I0812 23:55:02.625878 2619 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:55:02.626126 kubelet[2619]: I0812 23:55:02.626011 2619 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:55:02.626588 kubelet[2619]: I0812 23:55:02.626072 2619 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 12 23:55:02.626588 kubelet[2619]: I0812 23:55:02.626538 2619 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:55:02.626588 kubelet[2619]: I0812 23:55:02.626557 2619 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:55:02.627249 kubelet[2619]: I0812 23:55:02.626644 2619 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:55:02.627249 kubelet[2619]: I0812 23:55:02.626813 2619 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:55:02.627249 kubelet[2619]: I0812 23:55:02.626827 2619 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:55:02.627249 kubelet[2619]: I0812 23:55:02.626867 2619 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:55:02.627249 kubelet[2619]: I0812 23:55:02.626895 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:55:02.630120 kubelet[2619]: I0812 23:55:02.629890 2619 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:55:02.630589 kubelet[2619]: I0812 23:55:02.630363 2619 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:55:02.631370 kubelet[2619]: I0812 23:55:02.631343 2619 server.go:1274] "Started kubelet" Aug 12 23:55:02.635091 kubelet[2619]: I0812 23:55:02.633818 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:55:02.635091 kubelet[2619]: I0812 23:55:02.634154 2619 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:55:02.635091 kubelet[2619]: I0812 23:55:02.634312 2619 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:55:02.635091 kubelet[2619]: I0812 23:55:02.634777 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 
23:55:02.635091 kubelet[2619]: I0812 23:55:02.635020 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:55:02.635610 kubelet[2619]: I0812 23:55:02.635592 2619 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:55:02.641885 kubelet[2619]: E0812 23:55:02.641824 2619 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:55:02.642630 kubelet[2619]: I0812 23:55:02.642576 2619 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:55:02.643085 kubelet[2619]: I0812 23:55:02.643039 2619 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:55:02.643488 kubelet[2619]: I0812 23:55:02.643471 2619 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:55:02.646376 kubelet[2619]: I0812 23:55:02.646339 2619 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:55:02.646376 kubelet[2619]: I0812 23:55:02.646362 2619 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:55:02.646562 kubelet[2619]: I0812 23:55:02.646460 2619 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:55:02.648283 kubelet[2619]: I0812 23:55:02.648221 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:55:02.651512 kubelet[2619]: I0812 23:55:02.651493 2619 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:55:02.651620 kubelet[2619]: I0812 23:55:02.651585 2619 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:55:02.651620 kubelet[2619]: I0812 23:55:02.651610 2619 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:55:02.651804 kubelet[2619]: E0812 23:55:02.651655 2619 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:55:02.681405 kubelet[2619]: I0812 23:55:02.681362 2619 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:55:02.681405 kubelet[2619]: I0812 23:55:02.681381 2619 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:55:02.681405 kubelet[2619]: I0812 23:55:02.681401 2619 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:55:02.681622 kubelet[2619]: I0812 23:55:02.681547 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:55:02.681622 kubelet[2619]: I0812 23:55:02.681573 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:55:02.681622 kubelet[2619]: I0812 23:55:02.681592 2619 policy_none.go:49] "None policy: Start" Aug 12 23:55:02.682286 kubelet[2619]: I0812 23:55:02.682260 2619 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:55:02.682337 kubelet[2619]: I0812 23:55:02.682292 2619 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:55:02.682469 kubelet[2619]: I0812 23:55:02.682446 2619 state_mem.go:75] "Updated machine memory state" Aug 12 23:55:02.687525 kubelet[2619]: I0812 23:55:02.687357 2619 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:55:02.687636 kubelet[2619]: I0812 23:55:02.687566 2619 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:55:02.687636 kubelet[2619]: I0812 23:55:02.687579 2619 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:55:02.688758 kubelet[2619]: I0812 23:55:02.687829 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:55:02.759253 kubelet[2619]: E0812 23:55:02.759208 2619 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 12 23:55:02.759536 kubelet[2619]: E0812 23:55:02.759496 2619 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:55:02.760032 kubelet[2619]: E0812 23:55:02.759992 2619 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.797375 kubelet[2619]: I0812 23:55:02.797341 2619 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:55:02.803792 kubelet[2619]: I0812 23:55:02.803758 2619 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 12 23:55:02.803858 kubelet[2619]: I0812 23:55:02.803830 2619 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:55:02.846457 kubelet[2619]: I0812 23:55:02.846382 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:55:02.846457 kubelet[2619]: I0812 23:55:02.846440 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.846457 kubelet[2619]: I0812 23:55:02.846472 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.846638 kubelet[2619]: I0812 23:55:02.846487 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:55:02.846638 kubelet[2619]: I0812 23:55:02.846503 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:55:02.846638 kubelet[2619]: I0812 23:55:02.846516 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.846638 kubelet[2619]: I0812 23:55:02.846529 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.846638 kubelet[2619]: I0812 23:55:02.846547 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:55:02.846765 kubelet[2619]: I0812 23:55:02.846566 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7702d1de1b3e69c4fca1cfa4ab44d05f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7702d1de1b3e69c4fca1cfa4ab44d05f\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:55:02.853037 sudo[2659]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:55:02.853465 sudo[2659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 12 23:55:03.060025 kubelet[2619]: E0812 23:55:03.059900 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.060025 kubelet[2619]: E0812 23:55:03.059953 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.061266 kubelet[2619]: E0812 23:55:03.061044 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.339446 sudo[2659]: pam_unix(sudo:session): session closed for user root Aug 12 23:55:03.627724 kubelet[2619]: I0812 23:55:03.627559 
2619 apiserver.go:52] "Watching apiserver" Aug 12 23:55:03.644280 kubelet[2619]: I0812 23:55:03.644228 2619 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:55:03.664700 kubelet[2619]: E0812 23:55:03.664514 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.664876 kubelet[2619]: E0812 23:55:03.664743 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.669367 kubelet[2619]: E0812 23:55:03.669319 2619 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:55:03.670126 kubelet[2619]: E0812 23:55:03.669489 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:03.725635 kubelet[2619]: I0812 23:55:03.725427 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.725399069 podStartE2EDuration="3.725399069s" podCreationTimestamp="2025-08-12 23:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:03.717259643 +0000 UTC m=+1.267882876" watchObservedRunningTime="2025-08-12 23:55:03.725399069 +0000 UTC m=+1.276022302" Aug 12 23:55:03.733427 kubelet[2619]: I0812 23:55:03.733347 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.733322531 podStartE2EDuration="5.733322531s" podCreationTimestamp="2025-08-12 23:54:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:03.725656347 +0000 UTC m=+1.276279580" watchObservedRunningTime="2025-08-12 23:55:03.733322531 +0000 UTC m=+1.283945764" Aug 12 23:55:03.741539 kubelet[2619]: I0812 23:55:03.741478 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.741459651 podStartE2EDuration="2.741459651s" podCreationTimestamp="2025-08-12 23:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:03.733612596 +0000 UTC m=+1.284235829" watchObservedRunningTime="2025-08-12 23:55:03.741459651 +0000 UTC m=+1.292082884" Aug 12 23:55:04.666024 kubelet[2619]: E0812 23:55:04.665487 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:04.769391 sudo[1693]: pam_unix(sudo:session): session closed for user root Aug 12 23:55:04.771076 sshd[1692]: Connection closed by 10.0.0.1 port 56728 Aug 12 23:55:04.771597 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:04.776404 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:56728.service: Deactivated successfully. Aug 12 23:55:04.779364 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:55:04.779603 systemd[1]: session-7.scope: Consumed 4.675s CPU time, 253.4M memory peak. Aug 12 23:55:04.781097 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:55:04.782463 systemd-logind[1489]: Removed session 7. 
Aug 12 23:55:06.140036 kubelet[2619]: E0812 23:55:06.139996 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:06.669590 kubelet[2619]: E0812 23:55:06.669532 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:08.249836 kubelet[2619]: I0812 23:55:08.249796 2619 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:55:08.250358 containerd[1508]: time="2025-08-12T23:55:08.250259291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:55:08.250633 kubelet[2619]: I0812 23:55:08.250468 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:55:09.167115 systemd[1]: Created slice kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice - libcontainer container kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice. Aug 12 23:55:09.177619 systemd[1]: Created slice kubepods-besteffort-poda8d83b7d_d8f8_4ab0_9fea_9377d54ea74c.slice - libcontainer container kubepods-besteffort-poda8d83b7d_d8f8_4ab0_9fea_9377d54ea74c.slice. 
Aug 12 23:55:09.187436 kubelet[2619]: E0812 23:55:09.187387 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.189595 kubelet[2619]: I0812 23:55:09.189560 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c-xtables-lock\") pod \"kube-proxy-fn7pl\" (UID: \"a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c\") " pod="kube-system/kube-proxy-fn7pl" Aug 12 23:55:09.189708 kubelet[2619]: I0812 23:55:09.189602 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trf2v\" (UniqueName: \"kubernetes.io/projected/a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c-kube-api-access-trf2v\") pod \"kube-proxy-fn7pl\" (UID: \"a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c\") " pod="kube-system/kube-proxy-fn7pl" Aug 12 23:55:09.189708 kubelet[2619]: I0812 23:55:09.189630 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-xtables-lock\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189708 kubelet[2619]: I0812 23:55:09.189651 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-run\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189708 kubelet[2619]: I0812 23:55:09.189672 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cni-path\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189708 kubelet[2619]: I0812 23:55:09.189693 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-etc-cni-netd\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189711 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-lib-modules\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189743 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcfc\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-kube-api-access-skcfc\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189764 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-bpf-maps\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189784 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-hostproc\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") 
" pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189803 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-kernel\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.189960 kubelet[2619]: I0812 23:55:09.189822 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-config-path\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.190157 kubelet[2619]: I0812 23:55:09.189878 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c-lib-modules\") pod \"kube-proxy-fn7pl\" (UID: \"a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c\") " pod="kube-system/kube-proxy-fn7pl" Aug 12 23:55:09.190157 kubelet[2619]: I0812 23:55:09.189914 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3caca0bf-4d8d-40d9-849f-6151c3b93199-clustermesh-secrets\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.190157 kubelet[2619]: I0812 23:55:09.189934 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-net\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.190157 kubelet[2619]: I0812 
23:55:09.189964 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c-kube-proxy\") pod \"kube-proxy-fn7pl\" (UID: \"a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c\") " pod="kube-system/kube-proxy-fn7pl" Aug 12 23:55:09.190157 kubelet[2619]: I0812 23:55:09.190008 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-hubble-tls\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.191515 kubelet[2619]: I0812 23:55:09.190045 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-cgroup\") pod \"cilium-fnxk9\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") " pod="kube-system/cilium-fnxk9" Aug 12 23:55:09.205855 systemd[1]: Created slice kubepods-besteffort-pod23f86a9c_b7c0_4af7_b606_b295ca487d2e.slice - libcontainer container kubepods-besteffort-pod23f86a9c_b7c0_4af7_b606_b295ca487d2e.slice. 
Aug 12 23:55:09.292690 kubelet[2619]: I0812 23:55:09.292638 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23f86a9c-b7c0-4af7-b606-b295ca487d2e-cilium-config-path\") pod \"cilium-operator-5d85765b45-rh478\" (UID: \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\") " pod="kube-system/cilium-operator-5d85765b45-rh478" Aug 12 23:55:09.293260 kubelet[2619]: I0812 23:55:09.292710 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thck7\" (UniqueName: \"kubernetes.io/projected/23f86a9c-b7c0-4af7-b606-b295ca487d2e-kube-api-access-thck7\") pod \"cilium-operator-5d85765b45-rh478\" (UID: \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\") " pod="kube-system/cilium-operator-5d85765b45-rh478" Aug 12 23:55:09.476805 kubelet[2619]: E0812 23:55:09.476629 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.477420 containerd[1508]: time="2025-08-12T23:55:09.477308677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnxk9,Uid:3caca0bf-4d8d-40d9-849f-6151c3b93199,Namespace:kube-system,Attempt:0,}" Aug 12 23:55:09.492299 kubelet[2619]: E0812 23:55:09.492218 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.492939 containerd[1508]: time="2025-08-12T23:55:09.492885413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fn7pl,Uid:a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c,Namespace:kube-system,Attempt:0,}" Aug 12 23:55:09.509211 containerd[1508]: time="2025-08-12T23:55:09.506334724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:55:09.509211 containerd[1508]: time="2025-08-12T23:55:09.506412069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:55:09.509211 containerd[1508]: time="2025-08-12T23:55:09.506423642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.509411 containerd[1508]: time="2025-08-12T23:55:09.506506188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.513575 kubelet[2619]: E0812 23:55:09.513533 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.514504 containerd[1508]: time="2025-08-12T23:55:09.514461415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rh478,Uid:23f86a9c-b7c0-4af7-b606-b295ca487d2e,Namespace:kube-system,Attempt:0,}" Aug 12 23:55:09.523717 containerd[1508]: time="2025-08-12T23:55:09.523523942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:55:09.523717 containerd[1508]: time="2025-08-12T23:55:09.523579844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:55:09.523717 containerd[1508]: time="2025-08-12T23:55:09.523594574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.524384 containerd[1508]: time="2025-08-12T23:55:09.523669744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.531428 systemd[1]: Started cri-containerd-4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57.scope - libcontainer container 4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57. Aug 12 23:55:09.543013 systemd[1]: Started cri-containerd-304de9786df84af339cfd5c7a8d1c0e505549617f3b3640450626ead1541dbd0.scope - libcontainer container 304de9786df84af339cfd5c7a8d1c0e505549617f3b3640450626ead1541dbd0. Aug 12 23:55:09.549567 containerd[1508]: time="2025-08-12T23:55:09.549117007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:55:09.549567 containerd[1508]: time="2025-08-12T23:55:09.549239933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:55:09.549567 containerd[1508]: time="2025-08-12T23:55:09.549267699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.549567 containerd[1508]: time="2025-08-12T23:55:09.549387640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:09.567652 containerd[1508]: time="2025-08-12T23:55:09.566655125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnxk9,Uid:3caca0bf-4d8d-40d9-849f-6151c3b93199,Namespace:kube-system,Attempt:0,} returns sandbox id \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\"" Aug 12 23:55:09.567787 kubelet[2619]: E0812 23:55:09.567707 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.569373 containerd[1508]: time="2025-08-12T23:55:09.569310257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 12 23:55:09.576307 systemd[1]: Started cri-containerd-45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7.scope - libcontainer container 45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7. 
Aug 12 23:55:09.582472 containerd[1508]: time="2025-08-12T23:55:09.582421651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fn7pl,Uid:a8d83b7d-d8f8-4ab0-9fea-9377d54ea74c,Namespace:kube-system,Attempt:0,} returns sandbox id \"304de9786df84af339cfd5c7a8d1c0e505549617f3b3640450626ead1541dbd0\"" Aug 12 23:55:09.583870 kubelet[2619]: E0812 23:55:09.583846 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.586567 containerd[1508]: time="2025-08-12T23:55:09.586527923Z" level=info msg="CreateContainer within sandbox \"304de9786df84af339cfd5c7a8d1c0e505549617f3b3640450626ead1541dbd0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:55:09.608328 containerd[1508]: time="2025-08-12T23:55:09.608294066Z" level=info msg="CreateContainer within sandbox \"304de9786df84af339cfd5c7a8d1c0e505549617f3b3640450626ead1541dbd0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6040039945e8733a9a951cb06946d7352683d7745b820a34b521654e14484bda\"" Aug 12 23:55:09.609154 containerd[1508]: time="2025-08-12T23:55:09.609110933Z" level=info msg="StartContainer for \"6040039945e8733a9a951cb06946d7352683d7745b820a34b521654e14484bda\"" Aug 12 23:55:09.626892 containerd[1508]: time="2025-08-12T23:55:09.626854112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rh478,Uid:23f86a9c-b7c0-4af7-b606-b295ca487d2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\"" Aug 12 23:55:09.627681 kubelet[2619]: E0812 23:55:09.627645 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.646247 systemd[1]: Started 
cri-containerd-6040039945e8733a9a951cb06946d7352683d7745b820a34b521654e14484bda.scope - libcontainer container 6040039945e8733a9a951cb06946d7352683d7745b820a34b521654e14484bda. Aug 12 23:55:09.677638 kubelet[2619]: E0812 23:55:09.677607 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:09.690438 containerd[1508]: time="2025-08-12T23:55:09.690073665Z" level=info msg="StartContainer for \"6040039945e8733a9a951cb06946d7352683d7745b820a34b521654e14484bda\" returns successfully" Aug 12 23:55:10.680108 kubelet[2619]: E0812 23:55:10.680041 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:11.681585 kubelet[2619]: E0812 23:55:11.681499 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:11.683208 update_engine[1491]: I20250812 23:55:11.683149 1491 update_attempter.cc:509] Updating boot flags... 
Aug 12 23:55:11.713102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2998) Aug 12 23:55:11.799584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2864) Aug 12 23:55:12.672544 kubelet[2619]: I0812 23:55:12.672399 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fn7pl" podStartSLOduration=3.672374598 podStartE2EDuration="3.672374598s" podCreationTimestamp="2025-08-12 23:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:10.691536214 +0000 UTC m=+8.242159457" watchObservedRunningTime="2025-08-12 23:55:12.672374598 +0000 UTC m=+10.222997831" Aug 12 23:55:13.226338 kubelet[2619]: E0812 23:55:13.226004 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:13.686087 kubelet[2619]: E0812 23:55:13.686001 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:18.642021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399099506.mount: Deactivated successfully. 
Aug 12 23:55:26.479571 containerd[1508]: time="2025-08-12T23:55:26.479508594Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:55:26.480724 containerd[1508]: time="2025-08-12T23:55:26.480675283Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 12 23:55:26.481964 containerd[1508]: time="2025-08-12T23:55:26.481916075Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:55:26.483530 containerd[1508]: time="2025-08-12T23:55:26.483487265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.914111477s" Aug 12 23:55:26.483530 containerd[1508]: time="2025-08-12T23:55:26.483515189Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 12 23:55:26.484671 containerd[1508]: time="2025-08-12T23:55:26.484560974Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 12 23:55:26.485785 containerd[1508]: time="2025-08-12T23:55:26.485740196Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:55:26.501537 containerd[1508]: time="2025-08-12T23:55:26.501490933Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\"" Aug 12 23:55:26.502235 containerd[1508]: time="2025-08-12T23:55:26.502208893Z" level=info msg="StartContainer for \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\"" Aug 12 23:55:26.538439 systemd[1]: Started cri-containerd-82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9.scope - libcontainer container 82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9. Aug 12 23:55:26.569117 containerd[1508]: time="2025-08-12T23:55:26.569065150Z" level=info msg="StartContainer for \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\" returns successfully" Aug 12 23:55:26.580275 systemd[1]: cri-containerd-82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9.scope: Deactivated successfully. 
Aug 12 23:55:26.730718 kubelet[2619]: E0812 23:55:26.730591 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:27.127043 containerd[1508]: time="2025-08-12T23:55:27.126968283Z" level=info msg="shim disconnected" id=82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9 namespace=k8s.io Aug 12 23:55:27.127043 containerd[1508]: time="2025-08-12T23:55:27.127030673Z" level=warning msg="cleaning up after shim disconnected" id=82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9 namespace=k8s.io Aug 12 23:55:27.127043 containerd[1508]: time="2025-08-12T23:55:27.127039851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:55:27.497643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9-rootfs.mount: Deactivated successfully. Aug 12 23:55:27.732701 kubelet[2619]: E0812 23:55:27.732648 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:27.734770 containerd[1508]: time="2025-08-12T23:55:27.734717536Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:55:27.752680 containerd[1508]: time="2025-08-12T23:55:27.752574942Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\"" Aug 12 23:55:27.753550 containerd[1508]: time="2025-08-12T23:55:27.753519928Z" level=info msg="StartContainer for 
\"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\"" Aug 12 23:55:27.791256 systemd[1]: Started cri-containerd-468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979.scope - libcontainer container 468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979. Aug 12 23:55:27.824444 containerd[1508]: time="2025-08-12T23:55:27.824386995Z" level=info msg="StartContainer for \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\" returns successfully" Aug 12 23:55:27.839791 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:55:27.840384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:55:27.841025 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:55:27.846614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:55:27.846974 systemd[1]: cri-containerd-468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979.scope: Deactivated successfully. Aug 12 23:55:27.874674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 12 23:55:27.874980 containerd[1508]: time="2025-08-12T23:55:27.874759097Z" level=info msg="shim disconnected" id=468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979 namespace=k8s.io Aug 12 23:55:27.874980 containerd[1508]: time="2025-08-12T23:55:27.874828261Z" level=warning msg="cleaning up after shim disconnected" id=468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979 namespace=k8s.io Aug 12 23:55:27.874980 containerd[1508]: time="2025-08-12T23:55:27.874839552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:55:28.335972 containerd[1508]: time="2025-08-12T23:55:28.335900134Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:55:28.336596 containerd[1508]: time="2025-08-12T23:55:28.336550711Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 12 23:55:28.337684 containerd[1508]: time="2025-08-12T23:55:28.337647298Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:55:28.339364 containerd[1508]: time="2025-08-12T23:55:28.339324267Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.854729157s" Aug 12 23:55:28.339412 containerd[1508]: time="2025-08-12T23:55:28.339365187Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 12 23:55:28.341427 containerd[1508]: time="2025-08-12T23:55:28.341392221Z" level=info msg="CreateContainer within sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 12 23:55:28.355111 containerd[1508]: time="2025-08-12T23:55:28.355015514Z" level=info msg="CreateContainer within sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\"" Aug 12 23:55:28.355748 containerd[1508]: time="2025-08-12T23:55:28.355683193Z" level=info msg="StartContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\"" Aug 12 23:55:28.387235 systemd[1]: Started cri-containerd-602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3.scope - libcontainer container 602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3. Aug 12 23:55:28.498824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979-rootfs.mount: Deactivated successfully. 
Aug 12 23:55:28.587333 containerd[1508]: time="2025-08-12T23:55:28.587195527Z" level=info msg="StartContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" returns successfully" Aug 12 23:55:28.735781 kubelet[2619]: E0812 23:55:28.735734 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:28.738076 kubelet[2619]: E0812 23:55:28.738019 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:28.739865 containerd[1508]: time="2025-08-12T23:55:28.739814787Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:55:29.121518 kubelet[2619]: I0812 23:55:29.121025 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rh478" podStartSLOduration=1.410328764 podStartE2EDuration="20.121005623s" podCreationTimestamp="2025-08-12 23:55:09 +0000 UTC" firstStartedPulling="2025-08-12 23:55:09.629342691 +0000 UTC m=+7.179965924" lastFinishedPulling="2025-08-12 23:55:28.34001955 +0000 UTC m=+25.890642783" observedRunningTime="2025-08-12 23:55:29.120468828 +0000 UTC m=+26.671092061" watchObservedRunningTime="2025-08-12 23:55:29.121005623 +0000 UTC m=+26.671628856" Aug 12 23:55:29.650982 containerd[1508]: time="2025-08-12T23:55:29.650895028Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\"" Aug 12 23:55:29.652142 containerd[1508]: time="2025-08-12T23:55:29.651998478Z" level=info 
msg="StartContainer for \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\"" Aug 12 23:55:29.728601 systemd[1]: Started cri-containerd-6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d.scope - libcontainer container 6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d. Aug 12 23:55:29.742727 kubelet[2619]: E0812 23:55:29.742667 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:29.771454 systemd[1]: cri-containerd-6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d.scope: Deactivated successfully. Aug 12 23:55:29.866090 containerd[1508]: time="2025-08-12T23:55:29.864513707Z" level=info msg="StartContainer for \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\" returns successfully" Aug 12 23:55:29.893721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d-rootfs.mount: Deactivated successfully. 
Aug 12 23:55:29.902703 containerd[1508]: time="2025-08-12T23:55:29.902546190Z" level=info msg="shim disconnected" id=6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d namespace=k8s.io Aug 12 23:55:29.902703 containerd[1508]: time="2025-08-12T23:55:29.902610645Z" level=warning msg="cleaning up after shim disconnected" id=6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d namespace=k8s.io Aug 12 23:55:29.902703 containerd[1508]: time="2025-08-12T23:55:29.902622067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:55:30.747156 kubelet[2619]: E0812 23:55:30.747108 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:30.749643 containerd[1508]: time="2025-08-12T23:55:30.749597908Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:55:30.770448 containerd[1508]: time="2025-08-12T23:55:30.770388296Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\"" Aug 12 23:55:30.771317 containerd[1508]: time="2025-08-12T23:55:30.770978794Z" level=info msg="StartContainer for \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\"" Aug 12 23:55:30.827260 systemd[1]: Started cri-containerd-c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4.scope - libcontainer container c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4. Aug 12 23:55:30.877680 systemd[1]: cri-containerd-c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4.scope: Deactivated successfully. 
Aug 12 23:55:30.878902 containerd[1508]: time="2025-08-12T23:55:30.878634234Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice/cri-containerd-c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4.scope/memory.events\": no such file or directory" Aug 12 23:55:30.883105 containerd[1508]: time="2025-08-12T23:55:30.883040389Z" level=info msg="StartContainer for \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\" returns successfully" Aug 12 23:55:30.920983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4-rootfs.mount: Deactivated successfully. Aug 12 23:55:30.968017 containerd[1508]: time="2025-08-12T23:55:30.967931374Z" level=info msg="shim disconnected" id=c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4 namespace=k8s.io Aug 12 23:55:30.968017 containerd[1508]: time="2025-08-12T23:55:30.967993804Z" level=warning msg="cleaning up after shim disconnected" id=c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4 namespace=k8s.io Aug 12 23:55:30.968017 containerd[1508]: time="2025-08-12T23:55:30.968004035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:55:31.750350 kubelet[2619]: E0812 23:55:31.750135 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:31.751884 containerd[1508]: time="2025-08-12T23:55:31.751842856Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:55:31.967951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831834622.mount: Deactivated 
successfully. Aug 12 23:55:31.969317 containerd[1508]: time="2025-08-12T23:55:31.969274651Z" level=info msg="CreateContainer within sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\"" Aug 12 23:55:31.969837 containerd[1508]: time="2025-08-12T23:55:31.969810192Z" level=info msg="StartContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\"" Aug 12 23:55:32.008244 systemd[1]: Started cri-containerd-f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687.scope - libcontainer container f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687. Aug 12 23:55:32.042223 containerd[1508]: time="2025-08-12T23:55:32.042176680Z" level=info msg="StartContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" returns successfully" Aug 12 23:55:32.207219 kubelet[2619]: I0812 23:55:32.207165 2619 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 12 23:55:32.242180 systemd[1]: Created slice kubepods-burstable-pod14200754_c803_4e7e_a746_d3e861f10455.slice - libcontainer container kubepods-burstable-pod14200754_c803_4e7e_a746_d3e861f10455.slice. Aug 12 23:55:32.249140 systemd[1]: Created slice kubepods-burstable-pod39c4849f_4041_4f5d_8aff_c175aa07d073.slice - libcontainer container kubepods-burstable-pod39c4849f_4041_4f5d_8aff_c175aa07d073.slice. 
Aug 12 23:55:32.358724 kubelet[2619]: I0812 23:55:32.358622 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14200754-c803-4e7e-a746-d3e861f10455-config-volume\") pod \"coredns-7c65d6cfc9-2bzmq\" (UID: \"14200754-c803-4e7e-a746-d3e861f10455\") " pod="kube-system/coredns-7c65d6cfc9-2bzmq" Aug 12 23:55:32.358724 kubelet[2619]: I0812 23:55:32.358673 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39c4849f-4041-4f5d-8aff-c175aa07d073-config-volume\") pod \"coredns-7c65d6cfc9-rdtqp\" (UID: \"39c4849f-4041-4f5d-8aff-c175aa07d073\") " pod="kube-system/coredns-7c65d6cfc9-rdtqp" Aug 12 23:55:32.358724 kubelet[2619]: I0812 23:55:32.358692 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46frq\" (UniqueName: \"kubernetes.io/projected/39c4849f-4041-4f5d-8aff-c175aa07d073-kube-api-access-46frq\") pod \"coredns-7c65d6cfc9-rdtqp\" (UID: \"39c4849f-4041-4f5d-8aff-c175aa07d073\") " pod="kube-system/coredns-7c65d6cfc9-rdtqp" Aug 12 23:55:32.359032 kubelet[2619]: I0812 23:55:32.358986 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxh4\" (UniqueName: \"kubernetes.io/projected/14200754-c803-4e7e-a746-d3e861f10455-kube-api-access-5gxh4\") pod \"coredns-7c65d6cfc9-2bzmq\" (UID: \"14200754-c803-4e7e-a746-d3e861f10455\") " pod="kube-system/coredns-7c65d6cfc9-2bzmq" Aug 12 23:55:32.548718 kubelet[2619]: E0812 23:55:32.548660 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:32.549569 containerd[1508]: time="2025-08-12T23:55:32.549498143Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2bzmq,Uid:14200754-c803-4e7e-a746-d3e861f10455,Namespace:kube-system,Attempt:0,}" Aug 12 23:55:32.552363 kubelet[2619]: E0812 23:55:32.552331 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:32.552989 containerd[1508]: time="2025-08-12T23:55:32.552948744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rdtqp,Uid:39c4849f-4041-4f5d-8aff-c175aa07d073,Namespace:kube-system,Attempt:0,}" Aug 12 23:55:32.755822 kubelet[2619]: E0812 23:55:32.755437 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:32.885853 kubelet[2619]: I0812 23:55:32.885761 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fnxk9" podStartSLOduration=6.969939399 podStartE2EDuration="23.885737833s" podCreationTimestamp="2025-08-12 23:55:09 +0000 UTC" firstStartedPulling="2025-08-12 23:55:09.568619442 +0000 UTC m=+7.119242665" lastFinishedPulling="2025-08-12 23:55:26.484417866 +0000 UTC m=+24.035041099" observedRunningTime="2025-08-12 23:55:32.885292585 +0000 UTC m=+30.435915838" watchObservedRunningTime="2025-08-12 23:55:32.885737833 +0000 UTC m=+30.436361066" Aug 12 23:55:33.757023 kubelet[2619]: E0812 23:55:33.756904 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:34.374013 systemd-networkd[1421]: cilium_host: Link UP Aug 12 23:55:34.374223 systemd-networkd[1421]: cilium_net: Link UP Aug 12 23:55:34.374423 systemd-networkd[1421]: cilium_net: Gained carrier Aug 12 23:55:34.374624 systemd-networkd[1421]: cilium_host: Gained carrier Aug 12 23:55:34.382841 
systemd-networkd[1421]: cilium_net: Gained IPv6LL Aug 12 23:55:34.492418 systemd-networkd[1421]: cilium_vxlan: Link UP Aug 12 23:55:34.492435 systemd-networkd[1421]: cilium_vxlan: Gained carrier Aug 12 23:55:34.715089 kernel: NET: Registered PF_ALG protocol family Aug 12 23:55:34.759401 kubelet[2619]: E0812 23:55:34.759353 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:35.046232 systemd-networkd[1421]: cilium_host: Gained IPv6LL Aug 12 23:55:35.430508 systemd-networkd[1421]: lxc_health: Link UP Aug 12 23:55:35.443133 systemd-networkd[1421]: lxc_health: Gained carrier Aug 12 23:55:35.558296 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL Aug 12 23:55:35.665093 kernel: eth0: renamed from tmp25e03 Aug 12 23:55:35.672612 kernel: eth0: renamed from tmp3972a Aug 12 23:55:35.678536 systemd-networkd[1421]: lxc4bdb23c18483: Link UP Aug 12 23:55:35.682531 systemd-networkd[1421]: lxc7cdb88173d2e: Link UP Aug 12 23:55:35.682855 systemd-networkd[1421]: lxc7cdb88173d2e: Gained carrier Aug 12 23:55:35.684207 systemd-networkd[1421]: lxc4bdb23c18483: Gained carrier Aug 12 23:55:35.760773 kubelet[2619]: E0812 23:55:35.760722 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:36.763119 kubelet[2619]: E0812 23:55:36.763073 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:36.902364 systemd-networkd[1421]: lxc_health: Gained IPv6LL Aug 12 23:55:37.542265 systemd-networkd[1421]: lxc4bdb23c18483: Gained IPv6LL Aug 12 23:55:37.670253 systemd-networkd[1421]: lxc7cdb88173d2e: Gained IPv6LL Aug 12 23:55:37.764251 kubelet[2619]: E0812 23:55:37.764216 2619 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:38.803120 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:52882.service - OpenSSH per-connection server daemon (10.0.0.1:52882). Aug 12 23:55:38.845649 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 52882 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:38.847778 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:38.853163 systemd-logind[1489]: New session 8 of user core. Aug 12 23:55:38.861446 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:55:39.002221 sshd[3853]: Connection closed by 10.0.0.1 port 52882 Aug 12 23:55:39.002612 sshd-session[3851]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:39.006963 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:52882.service: Deactivated successfully. Aug 12 23:55:39.009490 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:55:39.010217 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:55:39.011355 systemd-logind[1489]: Removed session 8. Aug 12 23:55:39.334779 containerd[1508]: time="2025-08-12T23:55:39.334542217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:55:39.335307 containerd[1508]: time="2025-08-12T23:55:39.334767758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:55:39.335307 containerd[1508]: time="2025-08-12T23:55:39.334793438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:39.335307 containerd[1508]: time="2025-08-12T23:55:39.334920300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:39.339623 containerd[1508]: time="2025-08-12T23:55:39.336293141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:55:39.339623 containerd[1508]: time="2025-08-12T23:55:39.336381220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:55:39.339623 containerd[1508]: time="2025-08-12T23:55:39.336418030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:39.339623 containerd[1508]: time="2025-08-12T23:55:39.339241610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:55:39.381259 systemd[1]: Started cri-containerd-25e038f9a391b3b69d5dc0ab894325def606f89b2517813f7e3d72f48becc124.scope - libcontainer container 25e038f9a391b3b69d5dc0ab894325def606f89b2517813f7e3d72f48becc124. Aug 12 23:55:39.383100 systemd[1]: Started cri-containerd-3972ac403b712e1803babbe46fed354e74d30704b14d7fdba0cfa13e3189ea45.scope - libcontainer container 3972ac403b712e1803babbe46fed354e74d30704b14d7fdba0cfa13e3189ea45. 
Aug 12 23:55:39.399874 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:55:39.402256 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:55:39.429877 containerd[1508]: time="2025-08-12T23:55:39.429817118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2bzmq,Uid:14200754-c803-4e7e-a746-d3e861f10455,Namespace:kube-system,Attempt:0,} returns sandbox id \"25e038f9a391b3b69d5dc0ab894325def606f89b2517813f7e3d72f48becc124\"" Aug 12 23:55:39.431711 kubelet[2619]: E0812 23:55:39.431680 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:39.433888 containerd[1508]: time="2025-08-12T23:55:39.433810920Z" level=info msg="CreateContainer within sandbox \"25e038f9a391b3b69d5dc0ab894325def606f89b2517813f7e3d72f48becc124\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:55:39.436615 containerd[1508]: time="2025-08-12T23:55:39.436593041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rdtqp,Uid:39c4849f-4041-4f5d-8aff-c175aa07d073,Namespace:kube-system,Attempt:0,} returns sandbox id \"3972ac403b712e1803babbe46fed354e74d30704b14d7fdba0cfa13e3189ea45\"" Aug 12 23:55:39.437186 kubelet[2619]: E0812 23:55:39.437166 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:39.439296 containerd[1508]: time="2025-08-12T23:55:39.439272785Z" level=info msg="CreateContainer within sandbox \"3972ac403b712e1803babbe46fed354e74d30704b14d7fdba0cfa13e3189ea45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:55:39.460429 containerd[1508]: 
time="2025-08-12T23:55:39.460335830Z" level=info msg="CreateContainer within sandbox \"25e038f9a391b3b69d5dc0ab894325def606f89b2517813f7e3d72f48becc124\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00ab54f6171aa6fa424289d7c1a16d10927f9352d69d021a3a551e1c6eccb214\"" Aug 12 23:55:39.462182 containerd[1508]: time="2025-08-12T23:55:39.461189837Z" level=info msg="StartContainer for \"00ab54f6171aa6fa424289d7c1a16d10927f9352d69d021a3a551e1c6eccb214\"" Aug 12 23:55:39.465703 containerd[1508]: time="2025-08-12T23:55:39.465648148Z" level=info msg="CreateContainer within sandbox \"3972ac403b712e1803babbe46fed354e74d30704b14d7fdba0cfa13e3189ea45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc9287b57a5a1883e17048019693346bf30cc38f8a101c19f4bc8e9f527427c5\"" Aug 12 23:55:39.466325 containerd[1508]: time="2025-08-12T23:55:39.466269239Z" level=info msg="StartContainer for \"dc9287b57a5a1883e17048019693346bf30cc38f8a101c19f4bc8e9f527427c5\"" Aug 12 23:55:39.493206 systemd[1]: Started cri-containerd-00ab54f6171aa6fa424289d7c1a16d10927f9352d69d021a3a551e1c6eccb214.scope - libcontainer container 00ab54f6171aa6fa424289d7c1a16d10927f9352d69d021a3a551e1c6eccb214. Aug 12 23:55:39.496566 systemd[1]: Started cri-containerd-dc9287b57a5a1883e17048019693346bf30cc38f8a101c19f4bc8e9f527427c5.scope - libcontainer container dc9287b57a5a1883e17048019693346bf30cc38f8a101c19f4bc8e9f527427c5. 
Aug 12 23:55:39.537280 containerd[1508]: time="2025-08-12T23:55:39.537139753Z" level=info msg="StartContainer for \"00ab54f6171aa6fa424289d7c1a16d10927f9352d69d021a3a551e1c6eccb214\" returns successfully" Aug 12 23:55:39.539713 containerd[1508]: time="2025-08-12T23:55:39.539608924Z" level=info msg="StartContainer for \"dc9287b57a5a1883e17048019693346bf30cc38f8a101c19f4bc8e9f527427c5\" returns successfully" Aug 12 23:55:39.774000 kubelet[2619]: E0812 23:55:39.773664 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:39.777640 kubelet[2619]: E0812 23:55:39.777601 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:39.839259 kubelet[2619]: I0812 23:55:39.839107 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2bzmq" podStartSLOduration=30.839086266 podStartE2EDuration="30.839086266s" podCreationTimestamp="2025-08-12 23:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:39.78528153 +0000 UTC m=+37.335904763" watchObservedRunningTime="2025-08-12 23:55:39.839086266 +0000 UTC m=+37.389709499" Aug 12 23:55:39.847547 kubelet[2619]: I0812 23:55:39.847474 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rdtqp" podStartSLOduration=30.847450001 podStartE2EDuration="30.847450001s" podCreationTimestamp="2025-08-12 23:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:55:39.847187307 +0000 UTC m=+37.397810540" watchObservedRunningTime="2025-08-12 23:55:39.847450001 +0000 UTC 
m=+37.398073234" Aug 12 23:55:40.779412 kubelet[2619]: E0812 23:55:40.779372 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:40.779412 kubelet[2619]: E0812 23:55:40.779372 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:41.781202 kubelet[2619]: E0812 23:55:41.781159 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:41.781202 kubelet[2619]: E0812 23:55:41.781190 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:55:44.015173 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:52898.service - OpenSSH per-connection server daemon (10.0.0.1:52898). Aug 12 23:55:44.060300 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 52898 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:44.062111 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:44.066622 systemd-logind[1489]: New session 9 of user core. Aug 12 23:55:44.073175 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:55:44.220682 sshd[4046]: Connection closed by 10.0.0.1 port 52898 Aug 12 23:55:44.221166 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:44.225014 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:52898.service: Deactivated successfully. Aug 12 23:55:44.227972 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:55:44.230361 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. 
Aug 12 23:55:44.231777 systemd-logind[1489]: Removed session 9. Aug 12 23:55:49.237789 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:53712.service - OpenSSH per-connection server daemon (10.0.0.1:53712). Aug 12 23:55:49.282668 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 53712 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:49.284432 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:49.290030 systemd-logind[1489]: New session 10 of user core. Aug 12 23:55:49.300265 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:55:49.428517 sshd[4063]: Connection closed by 10.0.0.1 port 53712 Aug 12 23:55:49.428900 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:49.433390 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:53712.service: Deactivated successfully. Aug 12 23:55:49.435721 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:55:49.436599 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:55:49.437622 systemd-logind[1489]: Removed session 10. Aug 12 23:55:54.442827 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:53724.service - OpenSSH per-connection server daemon (10.0.0.1:53724). Aug 12 23:55:54.483848 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 53724 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:54.485912 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:54.490734 systemd-logind[1489]: New session 11 of user core. Aug 12 23:55:54.501376 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 12 23:55:54.625534 sshd[4079]: Connection closed by 10.0.0.1 port 53724 Aug 12 23:55:54.625929 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:54.630163 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:53724.service: Deactivated successfully. Aug 12 23:55:54.632701 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:55:54.633507 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:55:54.634436 systemd-logind[1489]: Removed session 11. Aug 12 23:55:59.638789 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). Aug 12 23:55:59.679581 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:59.681476 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:59.685856 systemd-logind[1489]: New session 12 of user core. Aug 12 23:55:59.695197 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:55:59.807282 sshd[4095]: Connection closed by 10.0.0.1 port 56902 Aug 12 23:55:59.807861 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Aug 12 23:55:59.822340 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:56902.service: Deactivated successfully. Aug 12 23:55:59.824627 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:55:59.826624 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:55:59.832411 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:56916.service - OpenSSH per-connection server daemon (10.0.0.1:56916). Aug 12 23:55:59.833318 systemd-logind[1489]: Removed session 12. 
Aug 12 23:55:59.869138 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 56916 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:55:59.870706 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:55:59.875511 systemd-logind[1489]: New session 13 of user core. Aug 12 23:55:59.889231 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 12 23:56:00.040114 sshd[4112]: Connection closed by 10.0.0.1 port 56916 Aug 12 23:56:00.040597 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:00.052532 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:56916.service: Deactivated successfully. Aug 12 23:56:00.056651 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:56:00.060870 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:56:00.071522 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:56924.service - OpenSSH per-connection server daemon (10.0.0.1:56924). Aug 12 23:56:00.072399 systemd-logind[1489]: Removed session 13. Aug 12 23:56:00.109508 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 56924 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:56:00.111377 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:00.116363 systemd-logind[1489]: New session 14 of user core. Aug 12 23:56:00.125245 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 12 23:56:00.244885 sshd[4126]: Connection closed by 10.0.0.1 port 56924 Aug 12 23:56:00.245315 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:00.250559 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:56924.service: Deactivated successfully. Aug 12 23:56:00.253726 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:56:00.254590 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. 
Aug 12 23:56:00.255627 systemd-logind[1489]: Removed session 14. Aug 12 23:56:05.263809 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:56932.service - OpenSSH per-connection server daemon (10.0.0.1:56932). Aug 12 23:56:05.306010 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 56932 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:56:05.310912 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:05.320286 systemd-logind[1489]: New session 15 of user core. Aug 12 23:56:05.352820 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:56:05.435899 sshd[4143]: Connection closed by 10.0.0.1 port 56932 Aug 12 23:56:05.436330 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:05.440238 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:56932.service: Deactivated successfully. Aug 12 23:56:05.442567 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:56:05.443357 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:56:05.444237 systemd-logind[1489]: Removed session 15. Aug 12 23:56:10.449682 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:52346.service - OpenSSH per-connection server daemon (10.0.0.1:52346). Aug 12 23:56:10.489091 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 52346 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:56:10.491178 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:10.495958 systemd-logind[1489]: New session 16 of user core. Aug 12 23:56:10.501254 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 12 23:56:10.631236 sshd[4160]: Connection closed by 10.0.0.1 port 52346
Aug 12 23:56:10.631616 sshd-session[4158]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:10.636769 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:52346.service: Deactivated successfully.
Aug 12 23:56:10.639032 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:56:10.640005 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:56:10.640939 systemd-logind[1489]: Removed session 16.
Aug 12 23:56:15.669530 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:52356.service - OpenSSH per-connection server daemon (10.0.0.1:52356).
Aug 12 23:56:15.739625 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:15.741644 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:15.767652 systemd-logind[1489]: New session 17 of user core.
Aug 12 23:56:15.780919 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:56:15.945420 sshd[4175]: Connection closed by 10.0.0.1 port 52356
Aug 12 23:56:15.945901 sshd-session[4173]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:15.957640 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:52356.service: Deactivated successfully.
Aug 12 23:56:15.960263 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:56:15.962793 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:56:15.968427 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:52362.service - OpenSSH per-connection server daemon (10.0.0.1:52362).
Aug 12 23:56:15.969759 systemd-logind[1489]: Removed session 17.
Aug 12 23:56:16.007645 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 52362 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:16.009659 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:16.015529 systemd-logind[1489]: New session 18 of user core.
Aug 12 23:56:16.025346 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:56:16.311572 sshd[4190]: Connection closed by 10.0.0.1 port 52362
Aug 12 23:56:16.312096 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:16.326509 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:52362.service: Deactivated successfully.
Aug 12 23:56:16.329523 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:56:16.333250 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:56:16.346493 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:52368.service - OpenSSH per-connection server daemon (10.0.0.1:52368).
Aug 12 23:56:16.347685 systemd-logind[1489]: Removed session 18.
Aug 12 23:56:16.385377 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 52368 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:16.386993 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:16.391705 systemd-logind[1489]: New session 19 of user core.
Aug 12 23:56:16.401201 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:56:16.652920 kubelet[2619]: E0812 23:56:16.652758 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:18.286402 sshd[4203]: Connection closed by 10.0.0.1 port 52368
Aug 12 23:56:18.289794 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:18.301508 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:52368.service: Deactivated successfully.
Aug 12 23:56:18.307603 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:56:18.309199 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:56:18.321277 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:46692.service - OpenSSH per-connection server daemon (10.0.0.1:46692).
Aug 12 23:56:18.322994 systemd-logind[1489]: Removed session 19.
Aug 12 23:56:18.380140 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 46692 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:18.381485 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:18.389789 systemd-logind[1489]: New session 20 of user core.
Aug 12 23:56:18.403311 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 12 23:56:18.710081 sshd[4225]: Connection closed by 10.0.0.1 port 46692
Aug 12 23:56:18.712732 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:18.722548 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:46692.service: Deactivated successfully.
Aug 12 23:56:18.725270 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:56:18.726250 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:56:18.735885 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:46694.service - OpenSSH per-connection server daemon (10.0.0.1:46694).
Aug 12 23:56:18.737280 systemd-logind[1489]: Removed session 20.
Aug 12 23:56:18.782030 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 46694 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:18.784030 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:18.792230 systemd-logind[1489]: New session 21 of user core.
Aug 12 23:56:18.803433 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 12 23:56:18.943264 sshd[4238]: Connection closed by 10.0.0.1 port 46694
Aug 12 23:56:18.943762 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:18.949267 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:46694.service: Deactivated successfully.
Aug 12 23:56:18.952027 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:56:18.954486 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:56:18.957226 systemd-logind[1489]: Removed session 21.
Aug 12 23:56:22.652841 kubelet[2619]: E0812 23:56:22.652777 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:23.982608 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:46710.service - OpenSSH per-connection server daemon (10.0.0.1:46710).
Aug 12 23:56:24.094730 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 46710 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:24.097875 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:24.128123 systemd-logind[1489]: New session 22 of user core.
Aug 12 23:56:24.143385 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 12 23:56:24.358374 sshd[4253]: Connection closed by 10.0.0.1 port 46710
Aug 12 23:56:24.358805 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:24.365748 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:46710.service: Deactivated successfully.
Aug 12 23:56:24.372813 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:56:24.377566 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:56:24.382782 systemd-logind[1489]: Removed session 22.
Aug 12 23:56:29.373181 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:38620.service - OpenSSH per-connection server daemon (10.0.0.1:38620).
Aug 12 23:56:29.421149 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 38620 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:29.423555 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:29.429460 systemd-logind[1489]: New session 23 of user core.
Aug 12 23:56:29.440400 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 12 23:56:29.663248 sshd[4272]: Connection closed by 10.0.0.1 port 38620
Aug 12 23:56:29.663589 sshd-session[4270]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:29.668518 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:38620.service: Deactivated successfully.
Aug 12 23:56:29.671294 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:56:29.673302 systemd-logind[1489]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:56:29.674500 systemd-logind[1489]: Removed session 23.
Aug 12 23:56:30.653392 kubelet[2619]: E0812 23:56:30.653314 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:33.652504 kubelet[2619]: E0812 23:56:33.652431 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:34.676754 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:38636.service - OpenSSH per-connection server daemon (10.0.0.1:38636).
Aug 12 23:56:34.716122 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 38636 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:34.717951 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:34.722748 systemd-logind[1489]: New session 24 of user core.
Aug 12 23:56:34.733222 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 12 23:56:34.823461 kernel: hrtimer: interrupt took 8136614 ns
Aug 12 23:56:34.974102 sshd-session[4285]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:34.977748 sshd[4287]: Connection closed by 10.0.0.1 port 38636
Aug 12 23:56:34.990000 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:38636.service: Deactivated successfully.
Aug 12 23:56:35.002152 systemd[1]: session-24.scope: Deactivated successfully.
Aug 12 23:56:35.010631 systemd-logind[1489]: Session 24 logged out. Waiting for processes to exit.
Aug 12 23:56:35.018196 systemd-logind[1489]: Removed session 24.
Aug 12 23:56:39.985594 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428).
Aug 12 23:56:40.025540 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:40.027451 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:40.032467 systemd-logind[1489]: New session 25 of user core.
Aug 12 23:56:40.044321 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 12 23:56:40.166601 sshd[4304]: Connection closed by 10.0.0.1 port 59428
Aug 12 23:56:40.167088 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:40.171772 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:59428.service: Deactivated successfully.
Aug 12 23:56:40.174292 systemd[1]: session-25.scope: Deactivated successfully.
Aug 12 23:56:40.175339 systemd-logind[1489]: Session 25 logged out. Waiting for processes to exit.
Aug 12 23:56:40.176530 systemd-logind[1489]: Removed session 25.
Aug 12 23:56:43.652496 kubelet[2619]: E0812 23:56:43.652418 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:45.183707 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:59436.service - OpenSSH per-connection server daemon (10.0.0.1:59436).
Aug 12 23:56:45.222655 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 59436 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:45.224432 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:45.228746 systemd-logind[1489]: New session 26 of user core.
Aug 12 23:56:45.238195 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 12 23:56:45.351028 sshd[4319]: Connection closed by 10.0.0.1 port 59436
Aug 12 23:56:45.351489 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:45.363291 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:59436.service: Deactivated successfully.
Aug 12 23:56:45.365619 systemd[1]: session-26.scope: Deactivated successfully.
Aug 12 23:56:45.367478 systemd-logind[1489]: Session 26 logged out. Waiting for processes to exit.
Aug 12 23:56:45.375445 systemd[1]: Started sshd@26-10.0.0.52:22-10.0.0.1:59440.service - OpenSSH per-connection server daemon (10.0.0.1:59440).
Aug 12 23:56:45.376589 systemd-logind[1489]: Removed session 26.
Aug 12 23:56:45.409945 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 59440 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:45.411601 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:45.416474 systemd-logind[1489]: New session 27 of user core.
Aug 12 23:56:45.426175 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 12 23:56:47.311000 containerd[1508]: time="2025-08-12T23:56:47.310945658Z" level=info msg="StopContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" with timeout 30 (s)"
Aug 12 23:56:47.312611 containerd[1508]: time="2025-08-12T23:56:47.311345881Z" level=info msg="Stop container \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" with signal terminated"
Aug 12 23:56:47.319187 containerd[1508]: time="2025-08-12T23:56:47.319122289Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:56:47.321527 containerd[1508]: time="2025-08-12T23:56:47.321491171Z" level=info msg="StopContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" with timeout 2 (s)"
Aug 12 23:56:47.327437 systemd[1]: cri-containerd-602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3.scope: Deactivated successfully.
Aug 12 23:56:47.328656 containerd[1508]: time="2025-08-12T23:56:47.328623303Z" level=info msg="Stop container \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" with signal terminated"
Aug 12 23:56:47.337399 systemd-networkd[1421]: lxc_health: Link DOWN
Aug 12 23:56:47.337409 systemd-networkd[1421]: lxc_health: Lost carrier
Aug 12 23:56:47.355887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3-rootfs.mount: Deactivated successfully.
Aug 12 23:56:47.358878 systemd[1]: cri-containerd-f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687.scope: Deactivated successfully.
Aug 12 23:56:47.359516 systemd[1]: cri-containerd-f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687.scope: Consumed 7.547s CPU time, 122.7M memory peak, 308K read from disk, 13.3M written to disk.
Aug 12 23:56:47.378111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687-rootfs.mount: Deactivated successfully.
Aug 12 23:56:47.501720 containerd[1508]: time="2025-08-12T23:56:47.501414577Z" level=info msg="shim disconnected" id=602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3 namespace=k8s.io
Aug 12 23:56:47.501720 containerd[1508]: time="2025-08-12T23:56:47.501482887Z" level=warning msg="cleaning up after shim disconnected" id=602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3 namespace=k8s.io
Aug 12 23:56:47.501720 containerd[1508]: time="2025-08-12T23:56:47.501494609Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:47.502175 containerd[1508]: time="2025-08-12T23:56:47.501703758Z" level=info msg="shim disconnected" id=f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687 namespace=k8s.io
Aug 12 23:56:47.502175 containerd[1508]: time="2025-08-12T23:56:47.501774934Z" level=warning msg="cleaning up after shim disconnected" id=f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687 namespace=k8s.io
Aug 12 23:56:47.502175 containerd[1508]: time="2025-08-12T23:56:47.501787437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:47.522036 containerd[1508]: time="2025-08-12T23:56:47.521965191Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:56:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 12 23:56:47.525983 containerd[1508]: time="2025-08-12T23:56:47.525934620Z" level=info msg="StopContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" returns successfully"
Aug 12 23:56:47.527596 containerd[1508]: time="2025-08-12T23:56:47.527571047Z" level=info msg="StopContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" returns successfully"
Aug 12 23:56:47.529362 containerd[1508]: time="2025-08-12T23:56:47.529325848Z" level=info msg="StopPodSandbox for \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\""
Aug 12 23:56:47.530561 containerd[1508]: time="2025-08-12T23:56:47.530492891Z" level=info msg="StopPodSandbox for \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\""
Aug 12 23:56:47.533521 containerd[1508]: time="2025-08-12T23:56:47.530564778Z" level=info msg="Container to stop \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.533521 containerd[1508]: time="2025-08-12T23:56:47.533509685Z" level=info msg="Container to stop \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.533521 containerd[1508]: time="2025-08-12T23:56:47.533521979Z" level=info msg="Container to stop \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.533627 containerd[1508]: time="2025-08-12T23:56:47.533534002Z" level=info msg="Container to stop \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.533627 containerd[1508]: time="2025-08-12T23:56:47.533546104Z" level=info msg="Container to stop \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.534880 containerd[1508]: time="2025-08-12T23:56:47.529356597Z" level=info msg="Container to stop \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:56:47.538845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57-shm.mount: Deactivated successfully.
Aug 12 23:56:47.543799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7-shm.mount: Deactivated successfully.
Aug 12 23:56:47.545011 systemd[1]: cri-containerd-45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7.scope: Deactivated successfully.
Aug 12 23:56:47.546584 systemd[1]: cri-containerd-4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57.scope: Deactivated successfully.
Aug 12 23:56:47.571370 containerd[1508]: time="2025-08-12T23:56:47.570474327Z" level=info msg="shim disconnected" id=45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7 namespace=k8s.io
Aug 12 23:56:47.571370 containerd[1508]: time="2025-08-12T23:56:47.570550192Z" level=warning msg="cleaning up after shim disconnected" id=45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7 namespace=k8s.io
Aug 12 23:56:47.571370 containerd[1508]: time="2025-08-12T23:56:47.570558417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:47.572811 containerd[1508]: time="2025-08-12T23:56:47.572583273Z" level=info msg="shim disconnected" id=4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57 namespace=k8s.io
Aug 12 23:56:47.572811 containerd[1508]: time="2025-08-12T23:56:47.572615705Z" level=warning msg="cleaning up after shim disconnected" id=4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57 namespace=k8s.io
Aug 12 23:56:47.572811 containerd[1508]: time="2025-08-12T23:56:47.572623319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:47.589536 containerd[1508]: time="2025-08-12T23:56:47.589484548Z" level=info msg="TearDown network for sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" successfully"
Aug 12 23:56:47.589536 containerd[1508]: time="2025-08-12T23:56:47.589517631Z" level=info msg="StopPodSandbox for \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" returns successfully"
Aug 12 23:56:47.592415 containerd[1508]: time="2025-08-12T23:56:47.592388407Z" level=info msg="TearDown network for sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" successfully"
Aug 12 23:56:47.592415 containerd[1508]: time="2025-08-12T23:56:47.592411572Z" level=info msg="StopPodSandbox for \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" returns successfully"
Aug 12 23:56:47.705920 kubelet[2619]: I0812 23:56:47.705839 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-etc-cni-netd\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.705920 kubelet[2619]: I0812 23:56:47.705912 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skcfc\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-kube-api-access-skcfc\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.705920 kubelet[2619]: I0812 23:56:47.705938 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3caca0bf-4d8d-40d9-849f-6151c3b93199-clustermesh-secrets\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.705955 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-hubble-tls\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.705974 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23f86a9c-b7c0-4af7-b606-b295ca487d2e-cilium-config-path\") pod \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\" (UID: \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\") "
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.705990 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-hostproc\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.706005 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-lib-modules\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.705988 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.706664 kubelet[2619]: I0812 23:56:47.706022 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thck7\" (UniqueName: \"kubernetes.io/projected/23f86a9c-b7c0-4af7-b606-b295ca487d2e-kube-api-access-thck7\") pod \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\" (UID: \"23f86a9c-b7c0-4af7-b606-b295ca487d2e\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706138 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-run\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706156 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-kernel\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706179 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-config-path\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706196 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-net\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706232 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-cgroup\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.706889 kubelet[2619]: I0812 23:56:47.706253 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-xtables-lock\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706271 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-bpf-maps\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706285 2619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cni-path\") pod \"3caca0bf-4d8d-40d9-849f-6151c3b93199\" (UID: \"3caca0bf-4d8d-40d9-849f-6151c3b93199\") "
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706331 2619 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706360 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cni-path" (OuterVolumeSpecName: "cni-path") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706377 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.707133 kubelet[2619]: I0812 23:56:47.706393 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.709765 kubelet[2619]: I0812 23:56:47.709732 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3caca0bf-4d8d-40d9-849f-6151c3b93199-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 12 23:56:47.710411 kubelet[2619]: I0812 23:56:47.709883 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710411 kubelet[2619]: I0812 23:56:47.709909 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710411 kubelet[2619]: I0812 23:56:47.709904 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f86a9c-b7c0-4af7-b606-b295ca487d2e-kube-api-access-thck7" (OuterVolumeSpecName: "kube-api-access-thck7") pod "23f86a9c-b7c0-4af7-b606-b295ca487d2e" (UID: "23f86a9c-b7c0-4af7-b606-b295ca487d2e"). InnerVolumeSpecName "kube-api-access-thck7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:56:47.710411 kubelet[2619]: I0812 23:56:47.709934 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-hostproc" (OuterVolumeSpecName: "hostproc") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710411 kubelet[2619]: I0812 23:56:47.709934 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:56:47.710557 kubelet[2619]: I0812 23:56:47.709974 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710557 kubelet[2619]: I0812 23:56:47.709992 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710557 kubelet[2619]: I0812 23:56:47.710010 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:56:47.710557 kubelet[2619]: I0812 23:56:47.710392 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-kube-api-access-skcfc" (OuterVolumeSpecName: "kube-api-access-skcfc") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "kube-api-access-skcfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:56:47.713303 kubelet[2619]: I0812 23:56:47.713251 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3caca0bf-4d8d-40d9-849f-6151c3b93199" (UID: "3caca0bf-4d8d-40d9-849f-6151c3b93199"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:56:47.713633 kubelet[2619]: I0812 23:56:47.713603 2619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f86a9c-b7c0-4af7-b606-b295ca487d2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23f86a9c-b7c0-4af7-b606-b295ca487d2e" (UID: "23f86a9c-b7c0-4af7-b606-b295ca487d2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:56:47.745186 kubelet[2619]: E0812 23:56:47.745139 2619 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:56:47.806940 kubelet[2619]: I0812 23:56:47.806856 2619 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 12 23:56:47.806940 kubelet[2619]: I0812 23:56:47.806916 2619 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 12 23:56:47.806940 kubelet[2619]: I0812 23:56:47.806930 2619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skcfc\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-kube-api-access-skcfc\") on node \"localhost\" DevicePath \"\""
Aug 12
23:56:47.806940 kubelet[2619]: I0812 23:56:47.806943 2619 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3caca0bf-4d8d-40d9-849f-6151c3b93199-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.806940 kubelet[2619]: I0812 23:56:47.806955 2619 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3caca0bf-4d8d-40d9-849f-6151c3b93199-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.806940 kubelet[2619]: I0812 23:56:47.806965 2619 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23f86a9c-b7c0-4af7-b606-b295ca487d2e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.806977 2619 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.806988 2619 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807000 2619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thck7\" (UniqueName: \"kubernetes.io/projected/23f86a9c-b7c0-4af7-b606-b295ca487d2e-kube-api-access-thck7\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807011 2619 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807023 2619 reconciler_common.go:293] "Volume detached 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807034 2619 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807046 2619 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3caca0bf-4d8d-40d9-849f-6151c3b93199-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807279 kubelet[2619]: I0812 23:56:47.807085 2619 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.807467 kubelet[2619]: I0812 23:56:47.807095 2619 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3caca0bf-4d8d-40d9-849f-6151c3b93199-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 12 23:56:47.958695 kubelet[2619]: I0812 23:56:47.958397 2619 scope.go:117] "RemoveContainer" containerID="f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687" Aug 12 23:56:47.964832 systemd[1]: Removed slice kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice - libcontainer container kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice. Aug 12 23:56:47.965166 systemd[1]: kubepods-burstable-pod3caca0bf_4d8d_40d9_849f_6151c3b93199.slice: Consumed 7.669s CPU time, 123M memory peak, 324K read from disk, 13.3M written to disk. 
Aug 12 23:56:47.966869 containerd[1508]: time="2025-08-12T23:56:47.966820816Z" level=info msg="RemoveContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\""
Aug 12 23:56:47.967938 systemd[1]: Removed slice kubepods-besteffort-pod23f86a9c_b7c0_4af7_b606_b295ca487d2e.slice - libcontainer container kubepods-besteffort-pod23f86a9c_b7c0_4af7_b606_b295ca487d2e.slice.
Aug 12 23:56:48.083111 containerd[1508]: time="2025-08-12T23:56:48.082987766Z" level=info msg="RemoveContainer for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" returns successfully"
Aug 12 23:56:48.083485 kubelet[2619]: I0812 23:56:48.083432 2619 scope.go:117] "RemoveContainer" containerID="c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4"
Aug 12 23:56:48.085334 containerd[1508]: time="2025-08-12T23:56:48.085287685Z" level=info msg="RemoveContainer for \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\""
Aug 12 23:56:48.091325 containerd[1508]: time="2025-08-12T23:56:48.091173804Z" level=info msg="RemoveContainer for \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\" returns successfully"
Aug 12 23:56:48.091891 kubelet[2619]: I0812 23:56:48.091521 2619 scope.go:117] "RemoveContainer" containerID="6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d"
Aug 12 23:56:48.093435 containerd[1508]: time="2025-08-12T23:56:48.093381968Z" level=info msg="RemoveContainer for \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\""
Aug 12 23:56:48.146789 containerd[1508]: time="2025-08-12T23:56:48.146708183Z" level=info msg="RemoveContainer for \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\" returns successfully"
Aug 12 23:56:48.147071 kubelet[2619]: I0812 23:56:48.147005 2619 scope.go:117] "RemoveContainer" containerID="468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979"
Aug 12 23:56:48.148061 containerd[1508]: time="2025-08-12T23:56:48.148021634Z" level=info msg="RemoveContainer for \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\""
Aug 12 23:56:48.294760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7-rootfs.mount: Deactivated successfully.
Aug 12 23:56:48.294903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57-rootfs.mount: Deactivated successfully.
Aug 12 23:56:48.295007 systemd[1]: var-lib-kubelet-pods-23f86a9c\x2db7c0\x2d4af7\x2db606\x2db295ca487d2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthck7.mount: Deactivated successfully.
Aug 12 23:56:48.295147 systemd[1]: var-lib-kubelet-pods-3caca0bf\x2d4d8d\x2d40d9\x2d849f\x2d6151c3b93199-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskcfc.mount: Deactivated successfully.
Aug 12 23:56:48.295290 systemd[1]: var-lib-kubelet-pods-3caca0bf\x2d4d8d\x2d40d9\x2d849f\x2d6151c3b93199-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 12 23:56:48.295415 systemd[1]: var-lib-kubelet-pods-3caca0bf\x2d4d8d\x2d40d9\x2d849f\x2d6151c3b93199-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 12 23:56:48.438159 containerd[1508]: time="2025-08-12T23:56:48.438091842Z" level=info msg="RemoveContainer for \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\" returns successfully"
Aug 12 23:56:48.438595 kubelet[2619]: I0812 23:56:48.438440 2619 scope.go:117] "RemoveContainer" containerID="82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9"
Aug 12 23:56:48.439822 containerd[1508]: time="2025-08-12T23:56:48.439786410Z" level=info msg="RemoveContainer for \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\""
Aug 12 23:56:48.493232 containerd[1508]: time="2025-08-12T23:56:48.493148833Z" level=info msg="RemoveContainer for \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\" returns successfully"
Aug 12 23:56:48.493513 kubelet[2619]: I0812 23:56:48.493468 2619 scope.go:117] "RemoveContainer" containerID="f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687"
Aug 12 23:56:48.493779 containerd[1508]: time="2025-08-12T23:56:48.493727325Z" level=error msg="ContainerStatus for \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\": not found"
Aug 12 23:56:48.493989 kubelet[2619]: E0812 23:56:48.493938 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\": not found" containerID="f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687"
Aug 12 23:56:48.494119 kubelet[2619]: I0812 23:56:48.493982 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687"} err="failed to get container status \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\": rpc error: code = NotFound desc = an error occurred when try to find container \"f39cb1b59f7461107a128d8f9142553973270bf8da8e98894142ca9568b64687\": not found"
Aug 12 23:56:48.494119 kubelet[2619]: I0812 23:56:48.494118 2619 scope.go:117] "RemoveContainer" containerID="c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4"
Aug 12 23:56:48.494367 containerd[1508]: time="2025-08-12T23:56:48.494307479Z" level=error msg="ContainerStatus for \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\": not found"
Aug 12 23:56:48.494463 kubelet[2619]: E0812 23:56:48.494433 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\": not found" containerID="c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4"
Aug 12 23:56:48.494507 kubelet[2619]: I0812 23:56:48.494464 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4"} err="failed to get container status \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c04a1aecd217e81a442f105819864f010543fe4c7766c9d15ddf5db9fb27e3b4\": not found"
Aug 12 23:56:48.494507 kubelet[2619]: I0812 23:56:48.494490 2619 scope.go:117] "RemoveContainer" containerID="6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d"
Aug 12 23:56:48.494664 containerd[1508]: time="2025-08-12T23:56:48.494633551Z" level=error msg="ContainerStatus for \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\": not found"
Aug 12 23:56:48.494779 kubelet[2619]: E0812 23:56:48.494750 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\": not found" containerID="6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d"
Aug 12 23:56:48.494854 kubelet[2619]: I0812 23:56:48.494782 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d"} err="failed to get container status \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d3419fb0e9e62fbd68d093fbe6d61203e6fac069e83cbfaea6f6526ebd6b66d\": not found"
Aug 12 23:56:48.494854 kubelet[2619]: I0812 23:56:48.494798 2619 scope.go:117] "RemoveContainer" containerID="468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979"
Aug 12 23:56:48.494954 containerd[1508]: time="2025-08-12T23:56:48.494928462Z" level=error msg="ContainerStatus for \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\": not found"
Aug 12 23:56:48.495018 kubelet[2619]: E0812 23:56:48.495001 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\": not found" containerID="468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979"
Aug 12 23:56:48.495083 kubelet[2619]: I0812 23:56:48.495016 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979"} err="failed to get container status \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\": rpc error: code = NotFound desc = an error occurred when try to find container \"468ed4539d96fe775012ba57e577c77fa2679bd5ff70e3472b0fac85dfda6979\": not found"
Aug 12 23:56:48.495083 kubelet[2619]: I0812 23:56:48.495031 2619 scope.go:117] "RemoveContainer" containerID="82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9"
Aug 12 23:56:48.495370 containerd[1508]: time="2025-08-12T23:56:48.495323835Z" level=error msg="ContainerStatus for \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\": not found"
Aug 12 23:56:48.495503 kubelet[2619]: E0812 23:56:48.495478 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\": not found" containerID="82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9"
Aug 12 23:56:48.495551 kubelet[2619]: I0812 23:56:48.495503 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9"} err="failed to get container status \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\": rpc error: code = NotFound desc = an error occurred when try to find container \"82e5f6f8b0878e759ac36fe4c83aae7c935809908663217e874633058b688ac9\": not found"
Aug 12 23:56:48.495551 kubelet[2619]: I0812 23:56:48.495521 2619 scope.go:117] "RemoveContainer" containerID="602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3"
Aug 12 23:56:48.496451 containerd[1508]: time="2025-08-12T23:56:48.496425022Z" level=info msg="RemoveContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\""
Aug 12 23:56:48.500518 containerd[1508]: time="2025-08-12T23:56:48.500484151Z" level=info msg="RemoveContainer for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" returns successfully"
Aug 12 23:56:48.500718 kubelet[2619]: I0812 23:56:48.500641 2619 scope.go:117] "RemoveContainer" containerID="602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3"
Aug 12 23:56:48.500859 containerd[1508]: time="2025-08-12T23:56:48.500828728Z" level=error msg="ContainerStatus for \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\": not found"
Aug 12 23:56:48.500977 kubelet[2619]: E0812 23:56:48.500956 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\": not found" containerID="602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3"
Aug 12 23:56:48.501015 kubelet[2619]: I0812 23:56:48.500978 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3"} err="failed to get container status \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"602732a297a6b85824407a23ae4164f40bb1d3100d3135fb48e51b5032c903e3\": not found"
Aug 12 23:56:48.654490 kubelet[2619]: I0812 23:56:48.654426 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f86a9c-b7c0-4af7-b606-b295ca487d2e" path="/var/lib/kubelet/pods/23f86a9c-b7c0-4af7-b606-b295ca487d2e/volumes"
Aug 12 23:56:48.655222 kubelet[2619]: I0812 23:56:48.655185 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" path="/var/lib/kubelet/pods/3caca0bf-4d8d-40d9-849f-6151c3b93199/volumes"
Aug 12 23:56:48.811286 sshd[4334]: Connection closed by 10.0.0.1 port 59440
Aug 12 23:56:48.811960 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:48.823001 systemd[1]: sshd@26-10.0.0.52:22-10.0.0.1:59440.service: Deactivated successfully.
Aug 12 23:56:48.825502 systemd[1]: session-27.scope: Deactivated successfully.
Aug 12 23:56:48.827859 systemd-logind[1489]: Session 27 logged out. Waiting for processes to exit.
Aug 12 23:56:48.836548 systemd[1]: Started sshd@27-10.0.0.52:22-10.0.0.1:44590.service - OpenSSH per-connection server daemon (10.0.0.1:44590).
Aug 12 23:56:48.838494 systemd-logind[1489]: Removed session 27.
Aug 12 23:56:48.876405 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 44590 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:48.878432 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:48.884345 systemd-logind[1489]: New session 28 of user core.
Aug 12 23:56:48.895329 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 12 23:56:49.566721 sshd[4497]: Connection closed by 10.0.0.1 port 44590
Aug 12 23:56:49.567439 sshd-session[4493]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:49.583175 systemd[1]: sshd@27-10.0.0.52:22-10.0.0.1:44590.service: Deactivated successfully.
Aug 12 23:56:49.585982 systemd[1]: session-28.scope: Deactivated successfully.
Aug 12 23:56:49.588294 kubelet[2619]: E0812 23:56:49.588247 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="mount-cgroup"
Aug 12 23:56:49.588294 kubelet[2619]: E0812 23:56:49.588287 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="apply-sysctl-overwrites"
Aug 12 23:56:49.588294 kubelet[2619]: E0812 23:56:49.588297 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23f86a9c-b7c0-4af7-b606-b295ca487d2e" containerName="cilium-operator"
Aug 12 23:56:49.588763 kubelet[2619]: E0812 23:56:49.588306 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="clean-cilium-state"
Aug 12 23:56:49.588763 kubelet[2619]: E0812 23:56:49.588314 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="cilium-agent"
Aug 12 23:56:49.588763 kubelet[2619]: E0812 23:56:49.588321 2619 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="mount-bpf-fs"
Aug 12 23:56:49.588763 kubelet[2619]: I0812 23:56:49.588351 2619 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f86a9c-b7c0-4af7-b606-b295ca487d2e" containerName="cilium-operator"
Aug 12 23:56:49.588763 kubelet[2619]: I0812 23:56:49.588359 2619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3caca0bf-4d8d-40d9-849f-6151c3b93199" containerName="cilium-agent"
Aug 12 23:56:49.591794 systemd-logind[1489]: Session 28 logged out. Waiting for processes to exit.
Aug 12 23:56:49.607128 systemd[1]: Started sshd@28-10.0.0.52:22-10.0.0.1:44604.service - OpenSSH per-connection server daemon (10.0.0.1:44604).
Aug 12 23:56:49.609793 systemd-logind[1489]: Removed session 28.
Aug 12 23:56:49.621149 systemd[1]: Created slice kubepods-burstable-pod232c81ac_5ad5_4b48_a32d_16de65fafcd9.slice - libcontainer container kubepods-burstable-pod232c81ac_5ad5_4b48_a32d_16de65fafcd9.slice.
Aug 12 23:56:49.650313 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 44604 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:49.652136 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:49.657999 systemd-logind[1489]: New session 29 of user core.
Aug 12 23:56:49.667300 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 12 23:56:49.719294 sshd[4513]: Connection closed by 10.0.0.1 port 44604
Aug 12 23:56:49.720092 kubelet[2619]: I0812 23:56:49.720027 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-host-proc-sys-kernel\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720193 kubelet[2619]: I0812 23:56:49.720102 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp98t\" (UniqueName: \"kubernetes.io/projected/232c81ac-5ad5-4b48-a32d-16de65fafcd9-kube-api-access-dp98t\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720193 kubelet[2619]: I0812 23:56:49.720129 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-lib-modules\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720193 kubelet[2619]: I0812 23:56:49.720155 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/232c81ac-5ad5-4b48-a32d-16de65fafcd9-clustermesh-secrets\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720193 kubelet[2619]: I0812 23:56:49.720173 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/232c81ac-5ad5-4b48-a32d-16de65fafcd9-hubble-tls\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720193 kubelet[2619]: I0812 23:56:49.720191 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-cilium-run\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720220 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-host-proc-sys-net\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720238 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-xtables-lock\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720255 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/232c81ac-5ad5-4b48-a32d-16de65fafcd9-cilium-config-path\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720273 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-hostproc\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720290 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-cni-path\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720352 kubelet[2619]: I0812 23:56:49.720307 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-etc-cni-netd\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720514 kubelet[2619]: I0812 23:56:49.720328 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-bpf-maps\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720514 kubelet[2619]: I0812 23:56:49.720347 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/232c81ac-5ad5-4b48-a32d-16de65fafcd9-cilium-cgroup\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.720514 kubelet[2619]: I0812 23:56:49.720365 2619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/232c81ac-5ad5-4b48-a32d-16de65fafcd9-cilium-ipsec-secrets\") pod \"cilium-n82n7\" (UID: \"232c81ac-5ad5-4b48-a32d-16de65fafcd9\") " pod="kube-system/cilium-n82n7"
Aug 12 23:56:49.721283 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Aug 12 23:56:49.733514 systemd[1]: sshd@28-10.0.0.52:22-10.0.0.1:44604.service: Deactivated successfully.
Aug 12 23:56:49.735871 systemd[1]: session-29.scope: Deactivated successfully.
Aug 12 23:56:49.736772 systemd-logind[1489]: Session 29 logged out. Waiting for processes to exit.
Aug 12 23:56:49.752497 systemd[1]: Started sshd@29-10.0.0.52:22-10.0.0.1:44618.service - OpenSSH per-connection server daemon (10.0.0.1:44618).
Aug 12 23:56:49.753846 systemd-logind[1489]: Removed session 29.
Aug 12 23:56:49.791800 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 44618 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 12 23:56:49.793497 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:56:49.798251 systemd-logind[1489]: New session 30 of user core.
Aug 12 23:56:49.808196 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 12 23:56:49.927866 kubelet[2619]: E0812 23:56:49.927699 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:49.928483 containerd[1508]: time="2025-08-12T23:56:49.928412993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n82n7,Uid:232c81ac-5ad5-4b48-a32d-16de65fafcd9,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:49.951927 containerd[1508]: time="2025-08-12T23:56:49.951796380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:49.951927 containerd[1508]: time="2025-08-12T23:56:49.951858388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:49.951927 containerd[1508]: time="2025-08-12T23:56:49.951869599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:49.952213 containerd[1508]: time="2025-08-12T23:56:49.951963177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:49.980333 systemd[1]: Started cri-containerd-9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d.scope - libcontainer container 9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d. 
Aug 12 23:56:50.006775 containerd[1508]: time="2025-08-12T23:56:50.006692028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n82n7,Uid:232c81ac-5ad5-4b48-a32d-16de65fafcd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\""
Aug 12 23:56:50.007529 kubelet[2619]: E0812 23:56:50.007501 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:50.010715 containerd[1508]: time="2025-08-12T23:56:50.010686494Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:56:50.031190 containerd[1508]: time="2025-08-12T23:56:50.031126554Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f\""
Aug 12 23:56:50.031792 containerd[1508]: time="2025-08-12T23:56:50.031733069Z" level=info msg="StartContainer for \"e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f\""
Aug 12 23:56:50.067241 systemd[1]: Started cri-containerd-e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f.scope - libcontainer container e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f.
Aug 12 23:56:50.097492 containerd[1508]: time="2025-08-12T23:56:50.097439911Z" level=info msg="StartContainer for \"e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f\" returns successfully"
Aug 12 23:56:50.109848 systemd[1]: cri-containerd-e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f.scope: Deactivated successfully.
Aug 12 23:56:50.152919 containerd[1508]: time="2025-08-12T23:56:50.152828255Z" level=info msg="shim disconnected" id=e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f namespace=k8s.io
Aug 12 23:56:50.152919 containerd[1508]: time="2025-08-12T23:56:50.152889051Z" level=warning msg="cleaning up after shim disconnected" id=e3be8cf1baf73210f7448d064e5f8ba69327212c75f038372871462327c1116f namespace=k8s.io
Aug 12 23:56:50.152919 containerd[1508]: time="2025-08-12T23:56:50.152897668Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:50.972940 kubelet[2619]: E0812 23:56:50.972859 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:50.975346 containerd[1508]: time="2025-08-12T23:56:50.975303062Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:56:51.247073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393558472.mount: Deactivated successfully.
Aug 12 23:56:51.385755 containerd[1508]: time="2025-08-12T23:56:51.385657030Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9\""
Aug 12 23:56:51.386443 containerd[1508]: time="2025-08-12T23:56:51.386383574Z" level=info msg="StartContainer for \"c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9\""
Aug 12 23:56:51.426234 systemd[1]: Started cri-containerd-c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9.scope - libcontainer container c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9.
Aug 12 23:56:51.459148 containerd[1508]: time="2025-08-12T23:56:51.459091843Z" level=info msg="StartContainer for \"c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9\" returns successfully"
Aug 12 23:56:51.468104 systemd[1]: cri-containerd-c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9.scope: Deactivated successfully.
Aug 12 23:56:51.497078 containerd[1508]: time="2025-08-12T23:56:51.496985693Z" level=info msg="shim disconnected" id=c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9 namespace=k8s.io
Aug 12 23:56:51.497078 containerd[1508]: time="2025-08-12T23:56:51.497073511Z" level=warning msg="cleaning up after shim disconnected" id=c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9 namespace=k8s.io
Aug 12 23:56:51.497078 containerd[1508]: time="2025-08-12T23:56:51.497085604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:51.828467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60b3759ebeb7ae81291cf70928871e94f928bee2b8b1b00d6d3daff01ce7fa9-rootfs.mount: Deactivated successfully.
Aug 12 23:56:51.976315 kubelet[2619]: E0812 23:56:51.976281 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:51.978238 containerd[1508]: time="2025-08-12T23:56:51.977873107Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:56:52.344751 containerd[1508]: time="2025-08-12T23:56:52.344678566Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044\""
Aug 12 23:56:52.345354 containerd[1508]: time="2025-08-12T23:56:52.345316310Z" level=info msg="StartContainer for \"3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044\""
Aug 12 23:56:52.381235 systemd[1]: Started cri-containerd-3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044.scope - libcontainer container 3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044.
Aug 12 23:56:52.415566 containerd[1508]: time="2025-08-12T23:56:52.415485991Z" level=info msg="StartContainer for \"3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044\" returns successfully"
Aug 12 23:56:52.417624 systemd[1]: cri-containerd-3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044.scope: Deactivated successfully.
Aug 12 23:56:52.447487 containerd[1508]: time="2025-08-12T23:56:52.447413811Z" level=info msg="shim disconnected" id=3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044 namespace=k8s.io
Aug 12 23:56:52.447487 containerd[1508]: time="2025-08-12T23:56:52.447475287Z" level=warning msg="cleaning up after shim disconnected" id=3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044 namespace=k8s.io
Aug 12 23:56:52.447487 containerd[1508]: time="2025-08-12T23:56:52.447483955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:52.746083 kubelet[2619]: E0812 23:56:52.745910 2619 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:56:52.827755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3edcaee3fc102739a10b891a214063cea56506e498eb1443fcc54f9389109044-rootfs.mount: Deactivated successfully.
Aug 12 23:56:52.980853 kubelet[2619]: E0812 23:56:52.980818 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:52.983732 containerd[1508]: time="2025-08-12T23:56:52.983665628Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:56:53.020482 containerd[1508]: time="2025-08-12T23:56:53.020335495Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601\""
Aug 12 23:56:53.020892 containerd[1508]: time="2025-08-12T23:56:53.020852139Z" level=info msg="StartContainer for \"55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601\""
Aug 12 23:56:53.057396 systemd[1]: Started cri-containerd-55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601.scope - libcontainer container 55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601.
Aug 12 23:56:53.088275 systemd[1]: cri-containerd-55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601.scope: Deactivated successfully.
Aug 12 23:56:53.090859 containerd[1508]: time="2025-08-12T23:56:53.090757683Z" level=info msg="StartContainer for \"55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601\" returns successfully"
Aug 12 23:56:53.119336 containerd[1508]: time="2025-08-12T23:56:53.119240371Z" level=info msg="shim disconnected" id=55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601 namespace=k8s.io
Aug 12 23:56:53.119336 containerd[1508]: time="2025-08-12T23:56:53.119305766Z" level=warning msg="cleaning up after shim disconnected" id=55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601 namespace=k8s.io
Aug 12 23:56:53.119336 containerd[1508]: time="2025-08-12T23:56:53.119318309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:53.828013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55415979940db85e4d48802adea3e67c8a1cf6849aeaf7d15879e997f31c2601-rootfs.mount: Deactivated successfully.
Aug 12 23:56:53.985390 kubelet[2619]: E0812 23:56:53.985351 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:53.988135 containerd[1508]: time="2025-08-12T23:56:53.988087679Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:56:54.079657 containerd[1508]: time="2025-08-12T23:56:54.079496238Z" level=info msg="CreateContainer within sandbox \"9253ec86118ffaa9e0cc2dd408f2315ab81085bb494a36bc9dd78f07fab2e38d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7\""
Aug 12 23:56:54.080088 containerd[1508]: time="2025-08-12T23:56:54.080045173Z" level=info msg="StartContainer for \"e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7\""
Aug 12 23:56:54.116269 systemd[1]: Started cri-containerd-e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7.scope - libcontainer container e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7.
Aug 12 23:56:54.156724 containerd[1508]: time="2025-08-12T23:56:54.156662081Z" level=info msg="StartContainer for \"e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7\" returns successfully"
Aug 12 23:56:54.743086 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 12 23:56:54.955942 kubelet[2619]: I0812 23:56:54.955030 2619 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-12T23:56:54Z","lastTransitionTime":"2025-08-12T23:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 12 23:56:54.989961 kubelet[2619]: E0812 23:56:54.989915 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:55.497922 kubelet[2619]: I0812 23:56:55.497839 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n82n7" podStartSLOduration=6.497813121 podStartE2EDuration="6.497813121s" podCreationTimestamp="2025-08-12 23:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:55.497741174 +0000 UTC m=+113.048364407" watchObservedRunningTime="2025-08-12 23:56:55.497813121 +0000 UTC m=+113.048436354"
Aug 12 23:56:55.992077 kubelet[2619]: E0812 23:56:55.992004 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:56.470339 systemd[1]: run-containerd-runc-k8s.io-e872baac31909f4af5a67a662cdd55e1ceefa52b8df3ad4cd9cc91b499d67ec7-runc.Ljtugu.mount: Deactivated successfully.
Aug 12 23:56:58.297182 systemd-networkd[1421]: lxc_health: Link UP
Aug 12 23:56:58.299762 systemd-networkd[1421]: lxc_health: Gained carrier
Aug 12 23:56:59.931093 kubelet[2619]: E0812 23:56:59.930283 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:59.999948 kubelet[2619]: E0812 23:56:59.999905 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:57:00.294352 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Aug 12 23:57:00.775432 kubelet[2619]: E0812 23:57:00.775366 2619 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59258->127.0.0.1:36919: write tcp 127.0.0.1:59258->127.0.0.1:36919: write: broken pipe
Aug 12 23:57:01.001351 kubelet[2619]: E0812 23:57:01.001295 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:57:02.645663 containerd[1508]: time="2025-08-12T23:57:02.645610558Z" level=info msg="StopPodSandbox for \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\""
Aug 12 23:57:02.646212 containerd[1508]: time="2025-08-12T23:57:02.645722331Z" level=info msg="TearDown network for sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" successfully"
Aug 12 23:57:02.646212 containerd[1508]: time="2025-08-12T23:57:02.645736347Z" level=info msg="StopPodSandbox for \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" returns successfully"
Aug 12 23:57:02.646212 containerd[1508]: time="2025-08-12T23:57:02.646022752Z" level=info msg="RemovePodSandbox for \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\""
Aug 12 23:57:02.646212 containerd[1508]: time="2025-08-12T23:57:02.646068770Z" level=info msg="Forcibly stopping sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\""
Aug 12 23:57:02.646212 containerd[1508]: time="2025-08-12T23:57:02.646116791Z" level=info msg="TearDown network for sandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" successfully"
Aug 12 23:57:02.748262 containerd[1508]: time="2025-08-12T23:57:02.748178873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 12 23:57:02.748439 containerd[1508]: time="2025-08-12T23:57:02.748279805Z" level=info msg="RemovePodSandbox \"45a22776c301b9e2dc12d269a1b9c1124f1758c82f359cae696a902ec073f5a7\" returns successfully"
Aug 12 23:57:02.748777 containerd[1508]: time="2025-08-12T23:57:02.748748037Z" level=info msg="StopPodSandbox for \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\""
Aug 12 23:57:02.748888 containerd[1508]: time="2025-08-12T23:57:02.748859098Z" level=info msg="TearDown network for sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" successfully"
Aug 12 23:57:02.748888 containerd[1508]: time="2025-08-12T23:57:02.748878714Z" level=info msg="StopPodSandbox for \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" returns successfully"
Aug 12 23:57:02.749174 containerd[1508]: time="2025-08-12T23:57:02.749140503Z" level=info msg="RemovePodSandbox for \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\""
Aug 12 23:57:02.749239 containerd[1508]: time="2025-08-12T23:57:02.749177143Z" level=info msg="Forcibly stopping sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\""
Aug 12 23:57:02.749311 containerd[1508]: time="2025-08-12T23:57:02.749259900Z" level=info msg="TearDown network for sandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" successfully"
Aug 12 23:57:02.928494 containerd[1508]: time="2025-08-12T23:57:02.928317060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 12 23:57:02.928494 containerd[1508]: time="2025-08-12T23:57:02.928402071Z" level=info msg="RemovePodSandbox \"4542cdc213b74415df92b100cafa9737e2a213dfa6ba9a98e74f0735755a4f57\" returns successfully"
Aug 12 23:57:07.072704 sshd[4522]: Connection closed by 10.0.0.1 port 44618
Aug 12 23:57:07.073212 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:07.077254 systemd[1]: sshd@29-10.0.0.52:22-10.0.0.1:44618.service: Deactivated successfully.
Aug 12 23:57:07.079647 systemd[1]: session-30.scope: Deactivated successfully.
Aug 12 23:57:07.080476 systemd-logind[1489]: Session 30 logged out. Waiting for processes to exit.
Aug 12 23:57:07.081479 systemd-logind[1489]: Removed session 30.
Aug 12 23:57:07.652597 kubelet[2619]: E0812 23:57:07.652428 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"