Sep 10 00:09:28.016024 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:30:20 -00 2025
Sep 10 00:09:28.016048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cb45ee9d6fbf7c87c04de861ba691f168ee946330fcd7b0ae66140314c138af
Sep 10 00:09:28.016060 kernel: BIOS-provided physical RAM map:
Sep 10 00:09:28.016067 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 10 00:09:28.016073 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 10 00:09:28.016080 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 10 00:09:28.016087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 10 00:09:28.016098 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 10 00:09:28.016105 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:09:28.016114 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 10 00:09:28.016121 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 00:09:28.016128 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 10 00:09:28.016134 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 00:09:28.016141 kernel: NX (Execute Disable) protection: active
Sep 10 00:09:28.016149 kernel: APIC: Static calls initialized
Sep 10 00:09:28.016158 kernel: SMBIOS 2.8 present.
Sep 10 00:09:28.016166 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 10 00:09:28.016173 kernel: Hypervisor detected: KVM
Sep 10 00:09:28.016180 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:09:28.016187 kernel: kvm-clock: using sched offset of 2903159164 cycles
Sep 10 00:09:28.016194 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:09:28.016202 kernel: tsc: Detected 2794.750 MHz processor
Sep 10 00:09:28.016209 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:09:28.016217 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:09:28.016224 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 10 00:09:28.016234 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 10 00:09:28.016242 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:09:28.016249 kernel: Using GB pages for direct mapping
Sep 10 00:09:28.016256 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:09:28.016263 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 10 00:09:28.016271 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016278 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016285 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016293 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 10 00:09:28.016303 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016310 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016317 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016324 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:09:28.016332 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 10 00:09:28.016339 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 10 00:09:28.016350 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 10 00:09:28.016359 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 10 00:09:28.016367 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 10 00:09:28.016374 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 10 00:09:28.016382 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 10 00:09:28.016389 kernel: No NUMA configuration found
Sep 10 00:09:28.016397 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 10 00:09:28.016404 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 10 00:09:28.016414 kernel: Zone ranges:
Sep 10 00:09:28.016422 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:09:28.016429 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 10 00:09:28.016436 kernel: Normal empty
Sep 10 00:09:28.016444 kernel: Movable zone start for each node
Sep 10 00:09:28.016451 kernel: Early memory node ranges
Sep 10 00:09:28.016458 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 10 00:09:28.016466 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 10 00:09:28.016473 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 10 00:09:28.016483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:09:28.016491 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 10 00:09:28.016498 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 10 00:09:28.016505 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:09:28.016513 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:09:28.016521 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:09:28.016528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:09:28.016535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:09:28.016543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:09:28.016550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:09:28.016560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:09:28.016568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:09:28.016575 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:09:28.016582 kernel: TSC deadline timer available
Sep 10 00:09:28.016594 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:09:28.016603 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:09:28.016612 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:09:28.016621 kernel: kvm-guest: setup PV sched yield
Sep 10 00:09:28.016631 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 10 00:09:28.016643 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:09:28.016652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:09:28.016665 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:09:28.016674 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:09:28.016684 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:09:28.016693 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:09:28.016702 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:09:28.016711 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:09:28.016722 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cb45ee9d6fbf7c87c04de861ba691f168ee946330fcd7b0ae66140314c138af
Sep 10 00:09:28.016735 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:09:28.016742 kernel: random: crng init done
Sep 10 00:09:28.016750 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:09:28.016766 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:09:28.016774 kernel: Fallback order for Node 0: 0
Sep 10 00:09:28.016781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 10 00:09:28.016788 kernel: Policy zone: DMA32
Sep 10 00:09:28.016865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:09:28.016876 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 138948K reserved, 0K cma-reserved)
Sep 10 00:09:28.016883 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:09:28.016891 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 10 00:09:28.016898 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:09:28.016905 kernel: Dynamic Preempt: voluntary
Sep 10 00:09:28.016913 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:09:28.016921 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:09:28.016929 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:09:28.016936 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:09:28.016946 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:09:28.016953 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:09:28.016961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:09:28.016968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:09:28.016976 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:09:28.016983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:09:28.016991 kernel: Console: colour VGA+ 80x25
Sep 10 00:09:28.016998 kernel: printk: console [ttyS0] enabled
Sep 10 00:09:28.017005 kernel: ACPI: Core revision 20230628
Sep 10 00:09:28.017016 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:09:28.017023 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:09:28.017030 kernel: x2apic enabled
Sep 10 00:09:28.017038 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:09:28.017045 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:09:28.017053 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:09:28.017060 kernel: kvm-guest: setup PV IPIs
Sep 10 00:09:28.017078 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:09:28.017086 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:09:28.017094 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 10 00:09:28.017101 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:09:28.017109 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:09:28.017119 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:09:28.017127 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:09:28.017135 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:09:28.017142 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:09:28.017150 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:09:28.017161 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:09:28.017168 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:09:28.017176 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:09:28.017184 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:09:28.017192 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:09:28.017200 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:09:28.017208 kernel: active return thunk: srso_return_thunk
Sep 10 00:09:28.017216 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:09:28.017226 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:09:28.017234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:09:28.017241 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:09:28.017252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:09:28.017260 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:09:28.017268 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:09:28.017275 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:09:28.017283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:09:28.017291 kernel: landlock: Up and running.
Sep 10 00:09:28.017301 kernel: SELinux: Initializing.
Sep 10 00:09:28.017309 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:09:28.017317 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:09:28.017325 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:09:28.017334 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:09:28.017342 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:09:28.017350 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:09:28.017358 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:09:28.017366 kernel: ... version: 0
Sep 10 00:09:28.017376 kernel: ... bit width: 48
Sep 10 00:09:28.017384 kernel: ... generic registers: 6
Sep 10 00:09:28.017392 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:09:28.017399 kernel: ... max period: 00007fffffffffff
Sep 10 00:09:28.017407 kernel: ... fixed-purpose events: 0
Sep 10 00:09:28.017414 kernel: ... event mask: 000000000000003f
Sep 10 00:09:28.017422 kernel: signal: max sigframe size: 1776
Sep 10 00:09:28.017430 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:09:28.017438 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:09:28.017448 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:09:28.017456 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:09:28.017464 kernel: .... node #0, CPUs: #1 #2 #3
Sep 10 00:09:28.017471 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:09:28.017479 kernel: smpboot: Max logical packages: 1
Sep 10 00:09:28.017487 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 10 00:09:28.017494 kernel: devtmpfs: initialized
Sep 10 00:09:28.017502 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:09:28.017510 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:09:28.017520 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:09:28.017528 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:09:28.017536 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:09:28.017543 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:09:28.017551 kernel: audit: type=2000 audit(1757462967.663:1): state=initialized audit_enabled=0 res=1
Sep 10 00:09:28.017559 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:09:28.017567 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:09:28.017574 kernel: cpuidle: using governor menu
Sep 10 00:09:28.017582 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:09:28.017592 kernel: dca service started, version 1.12.1
Sep 10 00:09:28.017600 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:09:28.017608 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:09:28.017615 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:09:28.017623 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:09:28.017631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:09:28.017639 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:09:28.017646 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:09:28.017654 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:09:28.017664 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:09:28.017672 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:09:28.017680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:09:28.017687 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:09:28.017695 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:09:28.017704 kernel: ACPI: Interpreter enabled
Sep 10 00:09:28.017715 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:09:28.017723 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:09:28.017735 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:09:28.017769 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:09:28.017779 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:09:28.017786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:09:28.018016 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:09:28.018154 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:09:28.018284 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:09:28.018295 kernel: PCI host bridge to bus 0000:00
Sep 10 00:09:28.018438 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:09:28.018556 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:09:28.019105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:09:28.019246 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:09:28.019376 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:09:28.019534 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 10 00:09:28.019702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:09:28.019954 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:09:28.020143 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:09:28.020314 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 10 00:09:28.020482 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 10 00:09:28.020645 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 10 00:09:28.020839 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:09:28.021036 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:09:28.021207 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 10 00:09:28.021369 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 10 00:09:28.021500 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 10 00:09:28.021663 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:09:28.021825 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 10 00:09:28.021960 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 10 00:09:28.022096 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 10 00:09:28.022237 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:09:28.022368 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 10 00:09:28.022495 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 10 00:09:28.022627 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 10 00:09:28.022846 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 10 00:09:28.023000 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:09:28.023132 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:09:28.023304 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:09:28.023462 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 10 00:09:28.023603 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 10 00:09:28.023752 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:09:28.025224 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 10 00:09:28.025245 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:09:28.025259 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:09:28.025267 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:09:28.025275 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:09:28.025283 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:09:28.025294 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:09:28.025302 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:09:28.025310 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:09:28.025317 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:09:28.025325 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:09:28.025336 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:09:28.025344 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:09:28.025352 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:09:28.025360 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:09:28.025368 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:09:28.025376 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:09:28.025384 kernel: iommu: Default domain type: Translated
Sep 10 00:09:28.025392 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:09:28.025403 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:09:28.025413 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:09:28.025422 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 10 00:09:28.025430 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 10 00:09:28.025568 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:09:28.025694 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:09:28.025854 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:09:28.025866 kernel: vgaarb: loaded
Sep 10 00:09:28.025874 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:09:28.025887 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:09:28.025895 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:09:28.025903 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:09:28.025911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:09:28.025919 kernel: pnp: PnP ACPI init
Sep 10 00:09:28.026081 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:09:28.026093 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:09:28.026101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:09:28.026113 kernel: NET: Registered PF_INET protocol family
Sep 10 00:09:28.026121 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:09:28.026129 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:09:28.026138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:09:28.026146 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:09:28.026154 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:09:28.026162 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:09:28.026170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:09:28.026178 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:09:28.026189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:09:28.026197 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:09:28.026320 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:09:28.026436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:09:28.026555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:09:28.026674 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:09:28.026822 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:09:28.026963 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 10 00:09:28.026978 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:09:28.026986 kernel: Initialise system trusted keyrings
Sep 10 00:09:28.026994 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:09:28.027003 kernel: Key type asymmetric registered
Sep 10 00:09:28.027010 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:09:28.027019 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:09:28.027026 kernel: io scheduler mq-deadline registered
Sep 10 00:09:28.027035 kernel: io scheduler kyber registered
Sep 10 00:09:28.027043 kernel: io scheduler bfq registered
Sep 10 00:09:28.027054 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:09:28.027062 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:09:28.027071 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:09:28.027078 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:09:28.027086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:09:28.027094 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:09:28.027103 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:09:28.027111 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:09:28.027119 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:09:28.027267 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:09:28.027282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:09:28.027402 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:09:28.027521 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:09:27 UTC (1757462967)
Sep 10 00:09:28.027639 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:09:28.027650 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:09:28.027658 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:09:28.027666 kernel: Segment Routing with IPv6
Sep 10 00:09:28.027677 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:09:28.027685 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:09:28.027693 kernel: Key type dns_resolver registered
Sep 10 00:09:28.027701 kernel: IPI shorthand broadcast: enabled
Sep 10 00:09:28.027709 kernel: sched_clock: Marking stable (689003566, 126428068)->(833148693, -17717059)
Sep 10 00:09:28.027717 kernel: registered taskstats version 1
Sep 10 00:09:28.027725 kernel: Loading compiled-in X.509 certificates
Sep 10 00:09:28.027733 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 43ca9166750d5d496f0e6dfe8927b735967dfba4'
Sep 10 00:09:28.027741 kernel: Key type .fscrypt registered
Sep 10 00:09:28.027751 kernel: Key type fscrypt-provisioning registered
Sep 10 00:09:28.027769 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:09:28.027777 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:09:28.027785 kernel: ima: No architecture policies found
Sep 10 00:09:28.027943 kernel: clk: Disabling unused clocks
Sep 10 00:09:28.027952 kernel: Freeing unused kernel image (initmem) memory: 43504K
Sep 10 00:09:28.027960 kernel: Write protecting the kernel read-only data: 38912k
Sep 10 00:09:28.027968 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 10 00:09:28.027976 kernel: Run /init as init process
Sep 10 00:09:28.027987 kernel: with arguments:
Sep 10 00:09:28.027995 kernel: /init
Sep 10 00:09:28.028003 kernel: with environment:
Sep 10 00:09:28.028011 kernel: HOME=/
Sep 10 00:09:28.028019 kernel: TERM=linux
Sep 10 00:09:28.028026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:09:28.028036 systemd[1]: Successfully made /usr/ read-only.
Sep 10 00:09:28.028047 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 00:09:28.028059 systemd[1]: Detected virtualization kvm.
Sep 10 00:09:28.028067 systemd[1]: Detected architecture x86-64.
Sep 10 00:09:28.028076 systemd[1]: Running in initrd.
Sep 10 00:09:28.028084 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:09:28.028093 systemd[1]: Hostname set to .
Sep 10 00:09:28.028101 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:09:28.028110 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:09:28.028118 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:09:28.028130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:09:28.028155 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:09:28.028168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:09:28.028177 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:09:28.028187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:09:28.028201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:09:28.028212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:09:28.028221 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:09:28.028230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:09:28.028238 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:09:28.028247 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:09:28.028256 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:09:28.028265 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:09:28.028276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:09:28.028284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:09:28.028293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:09:28.028302 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 00:09:28.028311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:09:28.028320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:09:28.028328 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:09:28.028337 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:09:28.028346 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:09:28.028357 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:09:28.028365 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:09:28.028374 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:09:28.028383 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:09:28.028391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:09:28.028400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:09:28.028409 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:09:28.028417 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:09:28.028433 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:09:28.028443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:09:28.028488 systemd-journald[194]: Collecting audit messages is disabled.
Sep 10 00:09:28.028512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:09:28.028524 systemd-journald[194]: Journal started
Sep 10 00:09:28.028547 systemd-journald[194]: Runtime Journal (/run/log/journal/91730c35d8b9467a8da3c1be88413619) is 6M, max 48.4M, 42.3M free.
Sep 10 00:09:28.012722 systemd-modules-load[195]: Inserted module 'overlay'
Sep 10 00:09:28.045586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:09:28.048770 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:09:28.048820 kernel: Bridge firewalling registered
Sep 10 00:09:28.048863 systemd-modules-load[195]: Inserted module 'br_netfilter'
Sep 10 00:09:28.049413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:09:28.051330 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:09:28.067147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:09:28.068377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:09:28.069345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:09:28.073016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:09:28.085878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:09:28.088386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:09:28.092862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:09:28.105152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:09:28.106743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:09:28.111410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:09:28.122268 dracut-cmdline[228]: dracut-dracut-053
Sep 10 00:09:28.126881 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cb45ee9d6fbf7c87c04de861ba691f168ee946330fcd7b0ae66140314c138af
Sep 10 00:09:28.151524 systemd-resolved[236]: Positive Trust Anchors:
Sep 10 00:09:28.151545 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:09:28.151575 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:09:28.154109 systemd-resolved[236]: Defaulting to hostname 'linux'.
Sep 10 00:09:28.155218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:09:28.163888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:09:28.243853 kernel: SCSI subsystem initialized
Sep 10 00:09:28.256834 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:09:28.268850 kernel: iscsi: registered transport (tcp)
Sep 10 00:09:28.292284 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:09:28.292379 kernel: QLogic iSCSI HBA Driver
Sep 10 00:09:28.348611 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:09:28.358001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:09:28.389004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:09:28.389090 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:09:28.390026 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:09:28.439854 kernel: raid6: avx2x4 gen() 22287 MB/s
Sep 10 00:09:28.456853 kernel: raid6: avx2x2 gen() 25174 MB/s
Sep 10 00:09:28.473938 kernel: raid6: avx2x1 gen() 20638 MB/s
Sep 10 00:09:28.473984 kernel: raid6: using algorithm avx2x2 gen() 25174 MB/s
Sep 10 00:09:28.491978 kernel: raid6: .... xor() 18454 MB/s, rmw enabled
Sep 10 00:09:28.492048 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:09:28.513836 kernel: xor: automatically using best checksumming function avx
Sep 10 00:09:28.662841 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:09:28.677026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:09:28.689010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:09:28.707075 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Sep 10 00:09:28.714113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:09:28.722975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:09:28.739352 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Sep 10 00:09:28.773622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:09:28.786939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:09:28.864856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:09:28.873209 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:09:28.886192 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:09:28.888807 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:09:28.890042 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:09:28.893526 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:09:28.898843 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:09:28.904169 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:09:28.909911 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:09:28.905472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:09:28.915469 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:09:28.915494 kernel: GPT:9289727 != 19775487
Sep 10 00:09:28.915509 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:09:28.915523 kernel: GPT:9289727 != 19775487
Sep 10 00:09:28.915537 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:09:28.915551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:09:28.919155 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:09:28.921019 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:09:28.921045 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:09:28.942822 kernel: libata version 3.00 loaded.
Sep 10 00:09:28.951256 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:09:28.951527 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:09:28.954002 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:09:28.954200 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:09:28.956820 kernel: scsi host0: ahci
Sep 10 00:09:28.957064 kernel: scsi host1: ahci
Sep 10 00:09:28.965837 kernel: scsi host2: ahci
Sep 10 00:09:28.966127 kernel: BTRFS: device fsid 003d077b-67cf-4f63-a44d-9ba6ca802913 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465)
Sep 10 00:09:28.966145 kernel: scsi host3: ahci
Sep 10 00:09:28.967218 kernel: scsi host4: ahci
Sep 10 00:09:28.976847 kernel: scsi host5: ahci
Sep 10 00:09:28.977187 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 10 00:09:28.977204 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 10 00:09:28.977231 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 10 00:09:28.977245 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 10 00:09:28.977259 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 10 00:09:28.977276 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 10 00:09:28.981831 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (469)
Sep 10 00:09:28.993975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:09:28.994193 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:09:29.009869 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:09:29.041381 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:09:29.052972 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:09:29.065951 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:09:29.067372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:09:29.067443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:09:29.073944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:09:29.077218 disk-uuid[563]: Primary Header is updated.
Sep 10 00:09:29.077218 disk-uuid[563]: Secondary Entries is updated.
Sep 10 00:09:29.077218 disk-uuid[563]: Secondary Header is updated.
Sep 10 00:09:29.081666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:09:29.077238 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:09:29.078240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:09:29.082688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:09:29.086821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:09:29.096014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:09:29.156334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:09:29.166030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:09:29.188906 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:09:29.290819 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:09:29.290897 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:09:29.290929 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:09:29.291842 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:09:29.291923 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:09:29.292830 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:09:29.293823 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:09:29.293839 kernel: ata3.00: applying bridge limits
Sep 10 00:09:29.294962 kernel: ata3.00: configured for UDMA/100
Sep 10 00:09:29.295835 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:09:29.346835 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:09:29.347182 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:09:29.360819 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:09:30.114419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:09:30.117556 disk-uuid[564]: The operation has completed successfully.
Sep 10 00:09:30.200190 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:09:30.200371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:09:30.259253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:09:30.269282 sh[593]: Success
Sep 10 00:09:30.294855 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:09:30.357338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:09:30.371034 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:09:30.377056 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:09:30.394766 kernel: BTRFS info (device dm-0): first mount of filesystem 003d077b-67cf-4f63-a44d-9ba6ca802913
Sep 10 00:09:30.394862 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:09:30.397154 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:09:30.397192 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:09:30.398082 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:09:30.421611 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:09:30.424742 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:09:30.441178 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:09:30.446707 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:09:30.473639 kernel: BTRFS info (device vda6): first mount of filesystem d4b3ac37-cdfe-41c0-a8e6-d2a7d17cd0f8
Sep 10 00:09:30.473735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:09:30.473753 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:09:30.493124 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:09:30.501826 kernel: BTRFS info (device vda6): last unmount of filesystem d4b3ac37-cdfe-41c0-a8e6-d2a7d17cd0f8
Sep 10 00:09:30.609043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:09:30.617985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:09:30.646029 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:09:30.651254 systemd-networkd[769]: lo: Link UP
Sep 10 00:09:30.651266 systemd-networkd[769]: lo: Gained carrier
Sep 10 00:09:30.653131 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:09:30.653771 systemd-networkd[769]: Enumeration completed
Sep 10 00:09:30.655207 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:09:30.656303 systemd[1]: Reached target network.target - Network.
Sep 10 00:09:30.659680 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:09:30.659686 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:09:30.660730 systemd-networkd[769]: eth0: Link UP
Sep 10 00:09:30.660733 systemd-networkd[769]: eth0: Gained carrier
Sep 10 00:09:30.660741 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:09:30.687869 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:09:31.427701 ignition[773]: Ignition 2.20.0
Sep 10 00:09:31.427713 ignition[773]: Stage: fetch-offline
Sep 10 00:09:31.427767 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:31.427778 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:31.427902 ignition[773]: parsed url from cmdline: ""
Sep 10 00:09:31.427907 ignition[773]: no config URL provided
Sep 10 00:09:31.427913 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:09:31.427922 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:09:31.427950 ignition[773]: op(1): [started] loading QEMU firmware config module
Sep 10 00:09:31.427956 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:09:31.435681 ignition[773]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:09:31.435705 ignition[773]: QEMU firmware config was not found. Ignoring...
Sep 10 00:09:31.474370 ignition[773]: parsing config with SHA512: cbff77310e1da65dfbd97001411e4e7232a654d02c76104d398c179c10074267b8466d4fb5a6173e5bd03b24dd9562e02f13f5626251e0898a9b2343ad00e346
Sep 10 00:09:31.487946 unknown[773]: fetched base config from "system"
Sep 10 00:09:31.488426 ignition[773]: fetch-offline: fetch-offline passed
Sep 10 00:09:31.487961 unknown[773]: fetched user config from "qemu"
Sep 10 00:09:31.489243 ignition[773]: Ignition finished successfully
Sep 10 00:09:31.494452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:09:31.495870 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:09:31.506915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:09:31.522904 ignition[783]: Ignition 2.20.0
Sep 10 00:09:31.522921 ignition[783]: Stage: kargs
Sep 10 00:09:31.523139 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:31.523155 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:31.526945 ignition[783]: kargs: kargs passed
Sep 10 00:09:31.526999 ignition[783]: Ignition finished successfully
Sep 10 00:09:31.532100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:09:31.544924 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:09:31.560850 ignition[792]: Ignition 2.20.0
Sep 10 00:09:31.560867 ignition[792]: Stage: disks
Sep 10 00:09:31.561099 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:31.561116 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:31.562334 ignition[792]: disks: disks passed
Sep 10 00:09:31.564885 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:09:31.562395 ignition[792]: Ignition finished successfully
Sep 10 00:09:31.566152 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:09:31.568016 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:09:31.569956 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:09:31.572072 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:09:31.574225 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:09:31.586925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:09:31.600049 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:09:31.606132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:09:31.624914 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:09:31.709826 kernel: EXT4-fs (vda9): mounted filesystem 3a835d73-5617-45b5-8047-be5dcc4564d7 r/w with ordered data mode. Quota mode: none.
Sep 10 00:09:31.710929 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:09:31.712423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:09:31.725896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:09:31.727898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:09:31.728329 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:09:31.728380 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:09:31.738430 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (810)
Sep 10 00:09:31.738459 kernel: BTRFS info (device vda6): first mount of filesystem d4b3ac37-cdfe-41c0-a8e6-d2a7d17cd0f8
Sep 10 00:09:31.738476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:09:31.728409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:09:31.744250 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:09:31.744267 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:09:31.735236 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:09:31.743215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:09:31.747658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:09:31.781982 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:09:31.785590 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:09:31.789426 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:09:31.794219 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:09:31.930171 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:09:31.946887 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:09:31.949410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:09:31.957943 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:09:31.959978 kernel: BTRFS info (device vda6): last unmount of filesystem d4b3ac37-cdfe-41c0-a8e6-d2a7d17cd0f8
Sep 10 00:09:31.978350 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:09:31.987550 ignition[926]: INFO : Ignition 2.20.0
Sep 10 00:09:31.987550 ignition[926]: INFO : Stage: mount
Sep 10 00:09:31.992775 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:31.992775 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:31.992775 ignition[926]: INFO : mount: mount passed
Sep 10 00:09:31.992775 ignition[926]: INFO : Ignition finished successfully
Sep 10 00:09:31.991060 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:09:31.999012 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:09:32.274956 systemd-networkd[769]: eth0: Gained IPv6LL
Sep 10 00:09:32.729020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:09:32.738736 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (938)
Sep 10 00:09:32.738810 kernel: BTRFS info (device vda6): first mount of filesystem d4b3ac37-cdfe-41c0-a8e6-d2a7d17cd0f8
Sep 10 00:09:32.738832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:09:32.740272 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:09:32.742823 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:09:32.744952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:09:32.862305 ignition[955]: INFO : Ignition 2.20.0
Sep 10 00:09:32.862305 ignition[955]: INFO : Stage: files
Sep 10 00:09:32.864455 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:32.864455 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:32.864455 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:09:32.864455 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:09:32.864455 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:09:32.871994 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:09:32.871994 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:09:32.871994 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:09:32.871994 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 10 00:09:32.871994 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 10 00:09:32.867423 unknown[955]: wrote ssh authorized keys file for user: core
Sep 10 00:09:33.083943 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:09:33.928440 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 10 00:09:33.930930 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:09:33.930930 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 10 00:09:34.160408 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:09:34.545040 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 10 00:09:34.547060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 10 00:09:34.808165 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:09:35.402472 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 10 00:09:35.402472 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 00:09:35.406409 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:09:35.408728 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:09:35.408728 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 00:09:35.408728 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 00:09:35.412889 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:09:35.414741 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:09:35.414741 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 00:09:35.417862 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:09:35.436320 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:09:35.441771 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:09:35.443772 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:09:35.443772 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:09:35.446931 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:09:35.448614 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:09:35.450659 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:09:35.452550 ignition[955]: INFO : files: files passed
Sep 10 00:09:35.453420 ignition[955]: INFO : Ignition finished successfully
Sep 10 00:09:35.456102 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:09:35.470048 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:09:35.471663 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:09:35.477232 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:09:35.477373 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:09:35.483837 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:09:35.487871 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:09:35.487871 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:09:35.491917 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:09:35.495438 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:09:35.496912 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:09:35.508966 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:09:35.533278 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:09:35.534358 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:09:35.537214 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:09:35.539284 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:09:35.541420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:09:35.553143 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:09:35.568218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:09:35.571420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:09:35.586008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:09:35.588510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:09:35.589948 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:09:35.592051 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:09:35.592220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:09:35.594825 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:09:35.596542 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:09:35.598817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:09:35.601068 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:09:35.603313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:09:35.605705 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:09:35.608050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:09:35.610552 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:09:35.612775 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:09:35.615178 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:09:35.617128 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:09:35.617295 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:09:35.619871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:09:35.621428 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:09:35.623760 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 00:09:35.623970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:09:35.626042 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:09:35.626186 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:09:35.628625 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:09:35.628887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:09:35.630590 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 00:09:35.632210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:09:35.636855 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:09:35.639117 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 00:09:35.641159 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 00:09:35.643389 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:09:35.643507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:09:35.645744 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:09:35.645863 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:09:35.647603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:09:35.647744 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:09:35.649624 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:09:35.649756 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 00:09:35.661957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 00:09:35.662883 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:09:35.663017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:09:35.665828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 00:09:35.666836 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:09:35.667110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:09:35.669328 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:09:35.669466 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:09:35.677268 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:09:35.677423 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 00:09:35.682146 ignition[1009]: INFO : Ignition 2.20.0
Sep 10 00:09:35.682146 ignition[1009]: INFO : Stage: umount
Sep 10 00:09:35.682146 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:09:35.682146 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:09:35.682146 ignition[1009]: INFO : umount: umount passed
Sep 10 00:09:35.682146 ignition[1009]: INFO : Ignition finished successfully
Sep 10 00:09:35.684064 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:09:35.684274 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 00:09:35.686899 systemd[1]: Stopped target network.target - Network.
Sep 10 00:09:35.688085 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:09:35.688195 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 00:09:35.689939 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:09:35.690005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 00:09:35.691718 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:09:35.691786 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 00:09:35.693842 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 00:09:35.693931 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 00:09:35.696014 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 00:09:35.698035 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 00:09:35.701158 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:09:35.704029 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:09:35.704165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 00:09:35.709156 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 00:09:35.709489 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:09:35.709659 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 00:09:35.713061 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 00:09:35.713366 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:09:35.713490 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 00:09:35.716174 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:09:35.716254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:09:35.718597 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:09:35.718651 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 00:09:35.730956 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 00:09:35.732851 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:09:35.732922 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:09:35.735155 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:09:35.735205 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:09:35.737701 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:09:35.737750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:09:35.739855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 00:09:35.739913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:09:35.742081 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:09:35.745415 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 00:09:35.745487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 00:09:35.753971 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:09:35.754098 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 00:09:35.759490 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:09:35.759686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:09:35.761848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:09:35.761905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:09:35.763866 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:09:35.763904 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:09:35.765836 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:09:35.765892 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:09:35.767938 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:09:35.767986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:09:35.769845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:09:35.769902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:09:35.779028 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 00:09:35.780205 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 00:09:35.780289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:09:35.782979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:09:35.783036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:09:35.786373 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 10 00:09:35.786441 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 10 00:09:35.788537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:09:35.788705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 00:09:35.791457 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 00:09:35.794709 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 00:09:35.806163 systemd[1]: Switching root.
Sep 10 00:09:35.842331 systemd-journald[194]: Journal stopped
Sep 10 00:09:37.249609 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:09:37.249672 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:09:37.249696 kernel: SELinux: policy capability open_perms=1
Sep 10 00:09:37.249708 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:09:37.249719 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:09:37.249734 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:09:37.249756 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:09:37.249768 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:09:37.249779 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:09:37.249791 kernel: audit: type=1403 audit(1757462976.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:09:37.249817 systemd[1]: Successfully loaded SELinux policy in 41.684ms.
Sep 10 00:09:37.249841 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.048ms.
Sep 10 00:09:37.249854 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 00:09:37.249867 systemd[1]: Detected virtualization kvm.
Sep 10 00:09:37.249882 systemd[1]: Detected architecture x86-64.
Sep 10 00:09:37.249894 systemd[1]: Detected first boot.
Sep 10 00:09:37.249906 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:09:37.249918 zram_generator::config[1055]: No configuration found.
Sep 10 00:09:37.249932 kernel: Guest personality initialized and is inactive
Sep 10 00:09:37.249944 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 10 00:09:37.249955 kernel: Initialized host personality
Sep 10 00:09:37.249966 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 00:09:37.249981 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:09:37.249994 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 00:09:37.250007 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:09:37.250019 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 00:09:37.250032 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:09:37.250057 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 00:09:37.250072 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 00:09:37.250087 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 00:09:37.250102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 00:09:37.250121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 00:09:37.250136 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 00:09:37.250149 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 00:09:37.250161 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 00:09:37.250173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:09:37.250185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:09:37.250197 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 00:09:37.250209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 00:09:37.250221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 00:09:37.250237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:09:37.250249 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 10 00:09:37.250261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:09:37.250273 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 00:09:37.250285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 00:09:37.250297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:09:37.250309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 00:09:37.250324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:09:37.250343 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:09:37.250355 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:09:37.250367 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:09:37.250379 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 00:09:37.250391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 00:09:37.250403 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 00:09:37.250415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:09:37.250427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:09:37.250442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:09:37.250454 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 00:09:37.250466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 00:09:37.250478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 00:09:37.250490 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 00:09:37.250502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:37.250514 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 00:09:37.250542 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 00:09:37.250554 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 00:09:37.250569 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 00:09:37.250581 systemd[1]: Reached target machines.target - Containers.
Sep 10 00:09:37.250594 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 00:09:37.250606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:09:37.250625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:09:37.250638 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 00:09:37.250650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:09:37.250662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:09:37.250678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:09:37.250841 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 00:09:37.250854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:09:37.250867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:09:37.250879 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 00:09:37.250891 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 00:09:37.250903 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 00:09:37.250915 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 00:09:37.250928 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 00:09:37.250947 kernel: fuse: init (API version 7.39)
Sep 10 00:09:37.250959 kernel: loop: module loaded
Sep 10 00:09:37.250971 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:09:37.250983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:09:37.250996 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 00:09:37.251009 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 00:09:37.251024 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 00:09:37.251063 systemd-journald[1126]: Collecting audit messages is disabled.
Sep 10 00:09:37.251100 systemd-journald[1126]: Journal started
Sep 10 00:09:37.251128 systemd-journald[1126]: Runtime Journal (/run/log/journal/91730c35d8b9467a8da3c1be88413619) is 6M, max 48.4M, 42.3M free.
Sep 10 00:09:37.023845 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:09:37.036021 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 00:09:37.036624 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 00:09:37.253953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:09:37.256857 kernel: ACPI: bus type drm_connector registered
Sep 10 00:09:37.256893 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 00:09:37.256916 systemd[1]: Stopped verity-setup.service.
Sep 10 00:09:37.260827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:37.265880 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:09:37.266693 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 00:09:37.268018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 00:09:37.269221 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 00:09:37.270321 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 00:09:37.271488 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 00:09:37.272665 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 00:09:37.273962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 00:09:37.275404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:09:37.276966 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 00:09:37.277182 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 00:09:37.278682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:09:37.278907 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:09:37.280329 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:09:37.280554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:09:37.282054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:09:37.282266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:09:37.283861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 00:09:37.284071 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 00:09:37.285632 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:09:37.285858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:09:37.287442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:09:37.288930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 00:09:37.290499 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 00:09:37.292192 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 00:09:37.306008 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 00:09:37.318954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 00:09:37.321553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 00:09:37.322702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 00:09:37.322742 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:09:37.325147 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 00:09:37.327772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 00:09:37.333908 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 00:09:37.335165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:09:37.336994 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 00:09:37.339274 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 00:09:37.340569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:09:37.344933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 00:09:37.346280 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:09:37.351062 systemd-journald[1126]: Time spent on flushing to /var/log/journal/91730c35d8b9467a8da3c1be88413619 is 17.674ms for 966 entries.
Sep 10 00:09:37.351062 systemd-journald[1126]: System Journal (/var/log/journal/91730c35d8b9467a8da3c1be88413619) is 8M, max 195.6M, 187.6M free.
Sep 10 00:09:37.375219 systemd-journald[1126]: Received client request to flush runtime journal.
Sep 10 00:09:37.350995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:09:37.356026 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 00:09:37.360000 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 00:09:37.365354 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 00:09:37.366695 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 00:09:37.370947 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 00:09:37.372638 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 00:09:37.378653 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 00:09:37.386377 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 00:09:37.393203 kernel: loop0: detected capacity change from 0 to 138176
Sep 10 00:09:37.398300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 00:09:37.405639 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:09:37.420199 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 10 00:09:37.422263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:09:37.427107 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 00:09:37.430972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 00:09:37.436928 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 00:09:37.452059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:09:37.454197 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 10 00:09:37.459061 kernel: loop1: detected capacity change from 0 to 147912
Sep 10 00:09:37.477363 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Sep 10 00:09:37.477390 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Sep 10 00:09:37.485169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:09:37.505842 kernel: loop2: detected capacity change from 0 to 224512
Sep 10 00:09:37.543832 kernel: loop3: detected capacity change from 0 to 138176
Sep 10 00:09:37.788395 kernel: loop4: detected capacity change from 0 to 147912
Sep 10 00:09:37.807824 kernel: loop5: detected capacity change from 0 to 224512
Sep 10 00:09:37.820071 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 00:09:37.821157 (sd-merge)[1200]: Merged extensions into '/usr'.
Sep 10 00:09:37.827725 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 00:09:37.827741 systemd[1]: Reloading...
Sep 10 00:09:37.931923 zram_generator::config[1228]: No configuration found.
Sep 10 00:09:38.089400 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 00:09:38.300485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:09:38.368089 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 00:09:38.368221 systemd[1]: Reloading finished in 539 ms.
Sep 10 00:09:38.389698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 00:09:38.391278 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 00:09:38.411504 systemd[1]: Starting ensure-sysext.service...
Sep 10 00:09:38.413653 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:09:38.426783 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Sep 10 00:09:38.426820 systemd[1]: Reloading...
Sep 10 00:09:38.450967 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 00:09:38.451374 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 00:09:38.452705 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 00:09:38.453149 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 10 00:09:38.453245 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 10 00:09:38.458936 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:09:38.458951 systemd-tmpfiles[1266]: Skipping /boot
Sep 10 00:09:38.525830 zram_generator::config[1292]: No configuration found.
Sep 10 00:09:38.533348 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:09:38.533368 systemd-tmpfiles[1266]: Skipping /boot
Sep 10 00:09:38.687361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:09:38.777819 systemd[1]: Reloading finished in 350 ms.
Sep 10 00:09:38.793458 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 00:09:38.817550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:09:38.841275 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 00:09:38.844632 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 00:09:38.848071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 00:09:38.853142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:09:38.857111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:09:38.863655 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 00:09:38.870527 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:38.870990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:09:38.872919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:09:38.876666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:09:38.881250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:09:38.882760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:09:38.883315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 00:09:38.889060 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 00:09:38.890315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:38.892530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:09:38.893757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:09:38.895909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:09:38.896200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:09:38.900166 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:09:38.900445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:09:38.907462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 00:09:38.915384 systemd-udevd[1342]: Using default interface naming scheme 'v255'.
Sep 10 00:09:38.922104 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 00:09:38.927709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:38.927985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:09:38.935179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:09:38.939085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:09:38.944215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:09:38.946356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:09:38.946517 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 00:09:38.949000 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 00:09:38.950616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:38.952619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:09:38.952972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:09:38.955578 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:09:38.955978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:09:38.957770 augenrules[1372]: No rules
Sep 10 00:09:38.958343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:09:38.958762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:09:38.961357 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 00:09:38.962002 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 00:09:38.968578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:09:38.972351 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 00:09:38.984111 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 00:09:38.988859 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 00:09:39.001234 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:09:39.014656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:39.026818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378)
Sep 10 00:09:39.025630 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 00:09:39.027280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:09:39.029539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:09:39.034062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:09:39.037316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:09:39.039987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:09:39.041411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:09:39.041458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 00:09:39.043974 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:09:39.049666 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 00:09:39.051122 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:09:39.051159 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:09:39.052055 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:09:39.052369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:09:39.055024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:09:39.055313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:09:39.057326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:09:39.057613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:09:39.075784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:09:39.141994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:09:39.142598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:09:39.143652 augenrules[1408]: /sbin/augenrules: No change
Sep 10 00:09:39.145496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:09:39.159524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:09:39.169015 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 00:09:39.181972 augenrules[1441]: No rules
Sep 10 00:09:39.206326 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 00:09:39.209006 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 00:09:39.211520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 00:09:39.215110 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 10 00:09:39.227109 systemd-resolved[1338]: Positive Trust Anchors:
Sep 10 00:09:39.228844 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:09:39.228936 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:09:39.240811 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Sep 10 00:09:39.243151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 10 00:09:39.244170 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:09:39.245488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:09:39.253213 systemd-networkd[1414]: lo: Link UP
Sep 10 00:09:39.253226 systemd-networkd[1414]: lo: Gained carrier
Sep 10 00:09:39.255007 systemd-networkd[1414]: Enumeration completed
Sep 10 00:09:39.255096 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:09:39.255558 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:09:39.255563 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:09:39.256624 systemd-networkd[1414]: eth0: Link UP
Sep 10 00:09:39.256633 systemd-networkd[1414]: eth0: Gained carrier
Sep 10 00:09:39.256646 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:09:39.256709 systemd[1]: Reached target network.target - Network.
Sep 10 00:09:39.260830 kernel: ACPI: button: Power Button [PWRF]
Sep 10 00:09:39.267067 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 10 00:09:39.269703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 00:09:39.271971 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 00:09:39.272937 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:09:39.273372 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 00:09:39.275323 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Sep 10 00:09:40.145955 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 10 00:09:40.146688 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 10 00:09:40.146912 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 10 00:09:40.142360 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:09:40.142435 systemd-timesyncd[1415]: Initial clock synchronization to Wed 2025-09-10 00:09:40.142242 UTC.
Sep 10 00:09:40.143155 systemd-resolved[1338]: Clock change detected. Flushing caches.
Sep 10 00:09:40.155843 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 10 00:09:40.156434 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 10 00:09:40.247572 kernel: mousedev: PS/2 mouse device common for all mice
Sep 10 00:09:40.287627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:09:40.320136 kernel: kvm_amd: TSC scaling supported
Sep 10 00:09:40.320216 kernel: kvm_amd: Nested Virtualization enabled
Sep 10 00:09:40.320230 kernel: kvm_amd: Nested Paging enabled
Sep 10 00:09:40.320242 kernel: kvm_amd: LBR virtualization supported
Sep 10 00:09:40.321393 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 10 00:09:40.321422 kernel: kvm_amd: Virtual GIF supported
Sep 10 00:09:40.342828 kernel: EDAC MC: Ver: 3.0.0
Sep 10 00:09:40.381469 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 10 00:09:40.403251 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 10 00:09:40.405122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:09:40.415741 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:09:40.455643 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 10 00:09:40.457314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:09:40.458528 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:09:40.459783 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 00:09:40.461319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 00:09:40.462965 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 00:09:40.464345 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 00:09:40.465646 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 00:09:40.466950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:09:40.466999 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:09:40.468085 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:09:40.470645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 00:09:40.474268 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 00:09:40.479118 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 10 00:09:40.480732 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 10 00:09:40.482293 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 10 00:09:40.488193 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 00:09:40.490224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 10 00:09:40.494131 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 10 00:09:40.496495 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 00:09:40.498013 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:09:40.499305 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:09:40.500915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:09:40.500969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:09:40.502495 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 10 00:09:40.505175 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 10 00:09:40.510331 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 10 00:09:40.514050 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 10 00:09:40.515497 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 10 00:09:40.517909 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:09:40.518023 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 10 00:09:40.519635 jq[1476]: false
Sep 10 00:09:40.522420 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 10 00:09:40.528992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 10 00:09:40.532352 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 10 00:09:40.540068 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 10 00:09:40.542095 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 10 00:09:40.543122 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 10 00:09:40.546950 systemd[1]: Starting update-engine.service - Update Engine...
Sep 10 00:09:40.549178 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 10 00:09:40.549662 dbus-daemon[1475]: [system] SELinux support is enabled
Sep 10 00:09:40.554004 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 10 00:09:40.559116 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 10 00:09:40.559400 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 10 00:09:40.559602 jq[1488]: true
Sep 10 00:09:40.563118 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 10 00:09:40.563399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found loop3
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found loop4
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found loop5
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found sr0
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda1
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda2
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda3
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found usr
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda4
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda6
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda7
Sep 10 00:09:40.566822 extend-filesystems[1477]: Found vda9
Sep 10 00:09:40.566822 extend-filesystems[1477]: Checking size of /dev/vda9
Sep 10 00:09:40.583483 jq[1492]: true
Sep 10 00:09:40.591393 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 10 00:09:40.597113 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 10 00:09:40.597170 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 10 00:09:40.604130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 10 00:09:40.604171 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 10 00:09:40.622664 systemd[1]: motdgen.service: Deactivated successfully.
Sep 10 00:09:40.622975 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 10 00:09:40.624707 tar[1490]: linux-amd64/LICENSE
Sep 10 00:09:40.625139 tar[1490]: linux-amd64/helm
Sep 10 00:09:40.629946 extend-filesystems[1477]: Resized partition /dev/vda9
Sep 10 00:09:40.631136 update_engine[1486]: I20250910 00:09:40.630215 1486 main.cc:92] Flatcar Update Engine starting
Sep 10 00:09:40.633104 update_engine[1486]: I20250910 00:09:40.632323 1486 update_check_scheduler.cc:74] Next update check in 7m5s
Sep 10 00:09:40.632348 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 10 00:09:40.635721 systemd[1]: Started update-engine.service - Update Engine.
Sep 10 00:09:40.640605 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Sep 10 00:09:40.697724 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024)
Sep 10 00:09:40.709284 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 10 00:09:40.706097 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 10 00:09:40.709971 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 10 00:09:40.715831 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1398)
Sep 10 00:09:40.739109 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 10 00:09:40.808410 systemd-logind[1484]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 10 00:09:40.808447 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 10 00:09:40.812532 systemd-logind[1484]: New seat seat0.
Sep 10 00:09:40.814209 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 10 00:09:40.832826 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 10 00:09:40.856920 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 10 00:09:40.865235 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 10 00:09:40.865235 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 10 00:09:40.865235 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 10 00:09:40.869360 extend-filesystems[1477]: Resized filesystem in /dev/vda9
Sep 10 00:09:40.866843 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 10 00:09:40.867293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 10 00:09:40.943052 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 00:09:40.971546 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 10 00:09:40.980264 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 10 00:09:41.081544 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 00:09:41.082000 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 10 00:09:41.093166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 10 00:09:41.200114 containerd[1498]: time="2025-09-10T00:09:41.192491223Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 10 00:09:41.235060 containerd[1498]: time="2025-09-10T00:09:41.234759456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237180 containerd[1498]: time="2025-09-10T00:09:41.237126655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237180 containerd[1498]: time="2025-09-10T00:09:41.237173883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 10 00:09:41.237270 containerd[1498]: time="2025-09-10T00:09:41.237199391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 10 00:09:41.237417 containerd[1498]: time="2025-09-10T00:09:41.237397723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 10 00:09:41.237439 containerd[1498]: time="2025-09-10T00:09:41.237425766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237531 containerd[1498]: time="2025-09-10T00:09:41.237512528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237561 containerd[1498]: time="2025-09-10T00:09:41.237530632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237846 containerd[1498]: time="2025-09-10T00:09:41.237823812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237846 containerd[1498]: time="2025-09-10T00:09:41.237843238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237903 containerd[1498]: time="2025-09-10T00:09:41.237859238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237903 containerd[1498]: time="2025-09-10T00:09:41.237870890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.237988 containerd[1498]: time="2025-09-10T00:09:41.237969956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.238246 containerd[1498]: time="2025-09-10T00:09:41.238218652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:09:41.238405 containerd[1498]: time="2025-09-10T00:09:41.238379474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:09:41.238405 containerd[1498]: time="2025-09-10T00:09:41.238396055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 10 00:09:41.238521 containerd[1498]: time="2025-09-10T00:09:41.238503476Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 10 00:09:41.238602 containerd[1498]: time="2025-09-10T00:09:41.238579459Z" level=info msg="metadata content store policy set" policy=shared
Sep 10 00:09:41.239281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251060050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251153355Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251175917Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251200203Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251220080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 10 00:09:41.252823 containerd[1498]: time="2025-09-10T00:09:41.251439431Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 10 00:09:41.251338 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 10 00:09:41.254550 containerd[1498]: time="2025-09-10T00:09:41.254494179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 10 00:09:41.254630 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 10 00:09:41.254689 containerd[1498]: time="2025-09-10T00:09:41.254633650Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 10 00:09:41.254689 containerd[1498]: time="2025-09-10T00:09:41.254649420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 10 00:09:41.254689 containerd[1498]: time="2025-09-10T00:09:41.254663126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 10 00:09:41.254689 containerd[1498]: time="2025-09-10T00:09:41.254676962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254691950Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254704583Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254721194Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254737445Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254751952Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254762 containerd[1498]: time="2025-09-10T00:09:41.254763874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254775717Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254836260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254856789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254873520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254885763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.254895 containerd[1498]: time="2025-09-10T00:09:41.254897525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254911682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254924025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254936157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254954622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254968999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254980210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.254991080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.255003764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255028 containerd[1498]: time="2025-09-10T00:09:41.255024824Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 10 00:09:41.255379 containerd[1498]: time="2025-09-10T00:09:41.255045322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255379 containerd[1498]: time="2025-09-10T00:09:41.255060541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255379 containerd[1498]: time="2025-09-10T00:09:41.255072864Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 10 00:09:41.255745 containerd[1498]: time="2025-09-10T00:09:41.255719065Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 10 00:09:41.255778 containerd[1498]: time="2025-09-10T00:09:41.255750013Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 10 00:09:41.255778 containerd[1498]: time="2025-09-10T00:09:41.255761094Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 10 00:09:41.255778 containerd[1498]: time="2025-09-10T00:09:41.255773327Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 10 00:09:41.255858 containerd[1498]: time="2025-09-10T00:09:41.255784568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 10 00:09:41.255858 containerd[1498]: time="2025-09-10T00:09:41.255800147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Sep 10 00:09:41.255858 containerd[1498]: time="2025-09-10T00:09:41.255829663Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:09:41.255858 containerd[1498]: time="2025-09-10T00:09:41.255841365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 00:09:41.256243 containerd[1498]: time="2025-09-10T00:09:41.256196731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:09:41.256243 containerd[1498]: time="2025-09-10T00:09:41.256250492Z" level=info msg="Connect containerd service" Sep 10 00:09:41.256490 containerd[1498]: time="2025-09-10T00:09:41.256275889Z" level=info msg="using legacy CRI server" Sep 10 00:09:41.256490 containerd[1498]: time="2025-09-10T00:09:41.256282572Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:09:41.256490 containerd[1498]: time="2025-09-10T00:09:41.256397868Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:09:41.256475 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 10 00:09:41.257053 containerd[1498]: time="2025-09-10T00:09:41.257025605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:09:41.257328 containerd[1498]: time="2025-09-10T00:09:41.257305530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:09:41.257382 containerd[1498]: time="2025-09-10T00:09:41.257359431Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:09:41.257470 containerd[1498]: time="2025-09-10T00:09:41.257416688Z" level=info msg="Start subscribing containerd event" Sep 10 00:09:41.257501 containerd[1498]: time="2025-09-10T00:09:41.257479657Z" level=info msg="Start recovering state" Sep 10 00:09:41.257558 containerd[1498]: time="2025-09-10T00:09:41.257539348Z" level=info msg="Start event monitor" Sep 10 00:09:41.257585 containerd[1498]: time="2025-09-10T00:09:41.257566640Z" level=info msg="Start snapshots syncer" Sep 10 00:09:41.257585 containerd[1498]: time="2025-09-10T00:09:41.257578632Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:09:41.257631 containerd[1498]: time="2025-09-10T00:09:41.257587048Z" level=info msg="Start streaming server" Sep 10 00:09:41.257652 containerd[1498]: time="2025-09-10T00:09:41.257646039Z" level=info msg="containerd successfully booted in 0.066966s" Sep 10 00:09:41.258036 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:09:41.522118 systemd-networkd[1414]: eth0: Gained IPv6LL Sep 10 00:09:41.526039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:09:41.528384 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:09:41.545301 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Sep 10 00:09:41.549452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:09:41.552078 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:09:41.567329 tar[1490]: linux-amd64/README.md Sep 10 00:09:41.580161 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:09:41.582105 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:09:41.582426 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:09:41.589302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:09:41.592324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:09:41.951117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:09:41.954099 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772). Sep 10 00:09:42.147934 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:42.151622 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:42.163578 systemd-logind[1484]: New session 1 of user core. Sep 10 00:09:42.165359 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:09:42.180296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:09:42.411878 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:09:42.451553 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:09:42.457301 (systemd)[1588]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:09:42.460481 systemd-logind[1484]: New session c1 of user core. 
Sep 10 00:09:42.645587 systemd[1588]: Queued start job for default target default.target. Sep 10 00:09:42.661139 systemd[1588]: Created slice app.slice - User Application Slice. Sep 10 00:09:42.661167 systemd[1588]: Reached target paths.target - Paths. Sep 10 00:09:42.661209 systemd[1588]: Reached target timers.target - Timers. Sep 10 00:09:42.662840 systemd[1588]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:09:42.675583 systemd[1588]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:09:42.675740 systemd[1588]: Reached target sockets.target - Sockets. Sep 10 00:09:42.675801 systemd[1588]: Reached target basic.target - Basic System. Sep 10 00:09:42.675880 systemd[1588]: Reached target default.target - Main User Target. Sep 10 00:09:42.675921 systemd[1588]: Startup finished in 201ms. Sep 10 00:09:42.676052 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:09:42.685003 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:09:42.760008 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:37786.service - OpenSSH per-connection server daemon (10.0.0.1:37786). Sep 10 00:09:42.809080 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 37786 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:42.809946 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:42.814478 systemd-logind[1484]: New session 2 of user core. Sep 10 00:09:42.824950 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:09:42.880271 sshd[1601]: Connection closed by 10.0.0.1 port 37786 Sep 10 00:09:42.880576 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:42.895611 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:37786.service: Deactivated successfully. Sep 10 00:09:42.897448 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:09:42.898824 systemd-logind[1484]: Session 2 logged out. 
Waiting for processes to exit. Sep 10 00:09:42.913140 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:37792.service - OpenSSH per-connection server daemon (10.0.0.1:37792). Sep 10 00:09:42.915836 systemd-logind[1484]: Removed session 2. Sep 10 00:09:42.954616 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 37792 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:42.956451 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:42.961020 systemd-logind[1484]: New session 3 of user core. Sep 10 00:09:42.974042 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:09:43.030563 sshd[1609]: Connection closed by 10.0.0.1 port 37792 Sep 10 00:09:43.030946 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:43.035079 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:37792.service: Deactivated successfully. Sep 10 00:09:43.037039 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:09:43.037662 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:09:43.038470 systemd-logind[1484]: Removed session 3. Sep 10 00:09:43.118291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:09:43.120185 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:09:43.121626 systemd[1]: Startup finished in 857ms (kernel) + 8.689s (initrd) + 5.870s (userspace) = 15.417s. 
Sep 10 00:09:43.149164 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:09:43.962548 kubelet[1619]: E0910 00:09:43.962471 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:09:43.966720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:09:43.966954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:09:43.967351 systemd[1]: kubelet.service: Consumed 2.245s CPU time, 266.6M memory peak. Sep 10 00:09:53.041799 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:33846.service - OpenSSH per-connection server daemon (10.0.0.1:33846). Sep 10 00:09:53.085711 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 33846 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.087612 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.092570 systemd-logind[1484]: New session 4 of user core. Sep 10 00:09:53.101928 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:09:53.155982 sshd[1634]: Connection closed by 10.0.0.1 port 33846 Sep 10 00:09:53.156390 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.167740 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:33846.service: Deactivated successfully. Sep 10 00:09:53.169653 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:09:53.171085 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit. 
Sep 10 00:09:53.184142 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:33862.service - OpenSSH per-connection server daemon (10.0.0.1:33862). Sep 10 00:09:53.185329 systemd-logind[1484]: Removed session 4. Sep 10 00:09:53.221045 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 33862 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.222566 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.227198 systemd-logind[1484]: New session 5 of user core. Sep 10 00:09:53.243949 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:09:53.294738 sshd[1642]: Connection closed by 10.0.0.1 port 33862 Sep 10 00:09:53.295405 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.316893 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:33862.service: Deactivated successfully. Sep 10 00:09:53.319423 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:09:53.320386 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:09:53.330120 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:33864.service - OpenSSH per-connection server daemon (10.0.0.1:33864). Sep 10 00:09:53.330881 systemd-logind[1484]: Removed session 5. Sep 10 00:09:53.366658 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.368350 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.372537 systemd-logind[1484]: New session 6 of user core. Sep 10 00:09:53.381945 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:09:53.438171 sshd[1650]: Connection closed by 10.0.0.1 port 33864 Sep 10 00:09:53.438536 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.455162 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:33864.service: Deactivated successfully. 
Sep 10 00:09:53.457436 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:09:53.459119 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:09:53.460577 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:33868.service - OpenSSH per-connection server daemon (10.0.0.1:33868). Sep 10 00:09:53.461464 systemd-logind[1484]: Removed session 6. Sep 10 00:09:53.501926 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 33868 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.503377 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.507546 systemd-logind[1484]: New session 7 of user core. Sep 10 00:09:53.516951 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:09:53.573927 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:09:53.574333 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:09:53.594846 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 10 00:09:53.596399 sshd[1658]: Connection closed by 10.0.0.1 port 33868 Sep 10 00:09:53.596835 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.607304 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:33868.service: Deactivated successfully. Sep 10 00:09:53.608869 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:09:53.610169 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:09:53.621118 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:33872.service - OpenSSH per-connection server daemon (10.0.0.1:33872). Sep 10 00:09:53.622073 systemd-logind[1484]: Removed session 7. 
Sep 10 00:09:53.657323 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 33872 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.658701 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.662831 systemd-logind[1484]: New session 8 of user core. Sep 10 00:09:53.672929 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:09:53.725782 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:09:53.726119 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:09:53.729673 sudo[1669]: pam_unix(sudo:session): session closed for user root Sep 10 00:09:53.735698 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 00:09:53.736107 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:09:53.752127 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 00:09:53.780767 augenrules[1691]: No rules Sep 10 00:09:53.782323 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:09:53.782584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 00:09:53.783651 sudo[1668]: pam_unix(sudo:session): session closed for user root Sep 10 00:09:53.785015 sshd[1667]: Connection closed by 10.0.0.1 port 33872 Sep 10 00:09:53.785319 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.797413 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:33872.service: Deactivated successfully. Sep 10 00:09:53.799080 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:09:53.800424 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. 
Sep 10 00:09:53.809184 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874). Sep 10 00:09:53.810291 systemd-logind[1484]: Removed session 8. Sep 10 00:09:53.845155 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:09:53.846639 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.850655 systemd-logind[1484]: New session 9 of user core. Sep 10 00:09:53.869942 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:09:53.922233 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:09:53.922558 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:09:54.217364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:09:54.225022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:09:54.232917 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:09:54.237237 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:09:54.476883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 00:09:54.511017 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:09:54.577608 kubelet[1732]: E0910 00:09:54.577547 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:09:54.584535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:09:54.584743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:09:54.585235 systemd[1]: kubelet.service: Consumed 353ms CPU time, 110.7M memory peak. Sep 10 00:09:54.860331 dockerd[1726]: time="2025-09-10T00:09:54.860236482Z" level=info msg="Starting up" Sep 10 00:09:55.489773 dockerd[1726]: time="2025-09-10T00:09:55.489714423Z" level=info msg="Loading containers: start." Sep 10 00:09:55.675845 kernel: Initializing XFRM netlink socket Sep 10 00:09:55.761778 systemd-networkd[1414]: docker0: Link UP Sep 10 00:09:55.808292 dockerd[1726]: time="2025-09-10T00:09:55.808247072Z" level=info msg="Loading containers: done." Sep 10 00:09:55.827267 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1503090831-merged.mount: Deactivated successfully. 
Sep 10 00:09:55.828996 dockerd[1726]: time="2025-09-10T00:09:55.828939079Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:09:55.829120 dockerd[1726]: time="2025-09-10T00:09:55.829100181Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 10 00:09:55.829282 dockerd[1726]: time="2025-09-10T00:09:55.829257707Z" level=info msg="Daemon has completed initialization" Sep 10 00:09:55.869026 dockerd[1726]: time="2025-09-10T00:09:55.868945882Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:09:55.869571 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:09:56.887839 containerd[1498]: time="2025-09-10T00:09:56.887664699Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 10 00:09:58.331386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578071356.mount: Deactivated successfully. Sep 10 00:10:04.788150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:10:04.806165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:05.293172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 00:10:05.315213 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:10:05.592290 kubelet[1999]: E0910 00:10:05.592113 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:10:05.604535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:10:05.605166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:10:05.607940 systemd[1]: kubelet.service: Consumed 463ms CPU time, 112.5M memory peak. Sep 10 00:10:06.023342 containerd[1498]: time="2025-09-10T00:10:06.023123402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:06.036042 containerd[1498]: time="2025-09-10T00:10:06.035718077Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 10 00:10:06.058094 containerd[1498]: time="2025-09-10T00:10:06.057722946Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:06.063747 containerd[1498]: time="2025-09-10T00:10:06.062631249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:06.066245 containerd[1498]: time="2025-09-10T00:10:06.065427282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id 
\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 9.177685007s" Sep 10 00:10:06.066245 containerd[1498]: time="2025-09-10T00:10:06.065520036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 10 00:10:06.068170 containerd[1498]: time="2025-09-10T00:10:06.068105443Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 10 00:10:12.231046 containerd[1498]: time="2025-09-10T00:10:12.230955636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:12.236043 containerd[1498]: time="2025-09-10T00:10:12.235940593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 10 00:10:12.249039 containerd[1498]: time="2025-09-10T00:10:12.248883100Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:12.385390 containerd[1498]: time="2025-09-10T00:10:12.385270476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:12.388035 containerd[1498]: time="2025-09-10T00:10:12.387741219Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 6.319572577s" Sep 10 00:10:12.388035 containerd[1498]: time="2025-09-10T00:10:12.387833792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 10 00:10:12.391505 containerd[1498]: time="2025-09-10T00:10:12.391318286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 10 00:10:15.786324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 00:10:15.819632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:16.785036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:16.792538 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:10:17.202415 kubelet[2024]: E0910 00:10:17.201437 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:10:17.212785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:10:17.213168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:10:17.214568 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 112.5M memory peak. 
Sep 10 00:10:17.995908 containerd[1498]: time="2025-09-10T00:10:17.995786082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:18.000482 containerd[1498]: time="2025-09-10T00:10:18.000057179Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 10 00:10:18.002837 containerd[1498]: time="2025-09-10T00:10:18.002364102Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:18.009501 containerd[1498]: time="2025-09-10T00:10:18.009406528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:18.013646 containerd[1498]: time="2025-09-10T00:10:18.013331383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 5.621899422s" Sep 10 00:10:18.013646 containerd[1498]: time="2025-09-10T00:10:18.013402589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 10 00:10:18.014798 containerd[1498]: time="2025-09-10T00:10:18.014056199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 10 00:10:21.151153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740076214.mount: Deactivated successfully. 
Sep 10 00:10:21.656619 kernel: hrtimer: interrupt took 6013108 ns Sep 10 00:10:23.664002 containerd[1498]: time="2025-09-10T00:10:23.663695109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:23.665252 containerd[1498]: time="2025-09-10T00:10:23.665174503Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 10 00:10:23.667234 containerd[1498]: time="2025-09-10T00:10:23.667172853Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:23.670394 containerd[1498]: time="2025-09-10T00:10:23.670321191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:23.671200 containerd[1498]: time="2025-09-10T00:10:23.671127043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 5.657030518s" Sep 10 00:10:23.671200 containerd[1498]: time="2025-09-10T00:10:23.671171669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 10 00:10:23.673657 containerd[1498]: time="2025-09-10T00:10:23.672741374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:10:24.475899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848817805.mount: Deactivated successfully. 
Sep 10 00:10:25.722950 update_engine[1486]: I20250910 00:10:25.722735 1486 update_attempter.cc:509] Updating boot flags... Sep 10 00:10:25.793832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2070) Sep 10 00:10:27.283317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 10 00:10:27.316469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:27.657256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:27.663646 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:10:27.798529 kubelet[2116]: E0910 00:10:27.798438 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:10:27.806069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:10:27.807601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:10:27.808183 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.1M memory peak. 
Sep 10 00:10:28.423361 containerd[1498]: time="2025-09-10T00:10:28.423271268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.424880 containerd[1498]: time="2025-09-10T00:10:28.424795446Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 00:10:28.426465 containerd[1498]: time="2025-09-10T00:10:28.426414372Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.430621 containerd[1498]: time="2025-09-10T00:10:28.430553795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.432276 containerd[1498]: time="2025-09-10T00:10:28.432193190Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.758906449s" Sep 10 00:10:28.432276 containerd[1498]: time="2025-09-10T00:10:28.432250409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:10:28.433037 containerd[1498]: time="2025-09-10T00:10:28.432984569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:10:28.978198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772835300.mount: Deactivated successfully. 
Sep 10 00:10:28.984688 containerd[1498]: time="2025-09-10T00:10:28.984619303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.985559 containerd[1498]: time="2025-09-10T00:10:28.985501945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:10:28.987091 containerd[1498]: time="2025-09-10T00:10:28.987044728Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.989661 containerd[1498]: time="2025-09-10T00:10:28.989602553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:28.990535 containerd[1498]: time="2025-09-10T00:10:28.990493771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 557.459258ms" Sep 10 00:10:28.990588 containerd[1498]: time="2025-09-10T00:10:28.990534158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:10:28.991113 containerd[1498]: time="2025-09-10T00:10:28.991087978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 00:10:29.624001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174581293.mount: Deactivated successfully. 
Sep 10 00:10:32.215133 containerd[1498]: time="2025-09-10T00:10:32.215069245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:32.215962 containerd[1498]: time="2025-09-10T00:10:32.215921586Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 10 00:10:32.217430 containerd[1498]: time="2025-09-10T00:10:32.217378870Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:32.220429 containerd[1498]: time="2025-09-10T00:10:32.220368190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:10:32.221627 containerd[1498]: time="2025-09-10T00:10:32.221590710Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.230469991s" Sep 10 00:10:32.221686 containerd[1498]: time="2025-09-10T00:10:32.221626298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 10 00:10:34.206159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:34.206395 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.1M memory peak. Sep 10 00:10:34.222056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:34.255348 systemd[1]: Reload requested from client PID 2213 ('systemctl') (unit session-9.scope)... 
Sep 10 00:10:34.255370 systemd[1]: Reloading... Sep 10 00:10:34.344833 zram_generator::config[2260]: No configuration found. Sep 10 00:10:34.600127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:10:34.706618 systemd[1]: Reloading finished in 450 ms. Sep 10 00:10:34.762251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:34.767056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:34.768352 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:10:34.768720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:34.768767 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.3M memory peak. Sep 10 00:10:34.770647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:10:34.951442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:10:34.971292 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:10:35.101950 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:10:35.102399 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:10:35.102399 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:10:35.102399 kubelet[2308]: I0910 00:10:35.102195 2308 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:10:35.358406 kubelet[2308]: I0910 00:10:35.358352 2308 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 00:10:35.358406 kubelet[2308]: I0910 00:10:35.358387 2308 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:10:35.358741 kubelet[2308]: I0910 00:10:35.358718 2308 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 00:10:35.388959 kubelet[2308]: E0910 00:10:35.388892 2308 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:35.390429 kubelet[2308]: I0910 00:10:35.390391 2308 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:10:35.396783 kubelet[2308]: E0910 00:10:35.396748 2308 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:10:35.396783 kubelet[2308]: I0910 00:10:35.396782 2308 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:10:35.403531 kubelet[2308]: I0910 00:10:35.403496 2308 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:10:35.405634 kubelet[2308]: I0910 00:10:35.405585 2308 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:10:35.405893 kubelet[2308]: I0910 00:10:35.405631 2308 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:10:35.406087 kubelet[2308]: I0910 00:10:35.405920 2308 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 10 00:10:35.406087 kubelet[2308]: I0910 00:10:35.405932 2308 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 00:10:35.406149 kubelet[2308]: I0910 00:10:35.406133 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:10:35.409114 kubelet[2308]: I0910 00:10:35.409092 2308 kubelet.go:446] "Attempting to sync node with API server" Sep 10 00:10:35.409157 kubelet[2308]: I0910 00:10:35.409131 2308 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:10:35.409208 kubelet[2308]: I0910 00:10:35.409190 2308 kubelet.go:352] "Adding apiserver pod source" Sep 10 00:10:35.409233 kubelet[2308]: I0910 00:10:35.409214 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:10:35.413096 kubelet[2308]: I0910 00:10:35.413068 2308 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 10 00:10:35.413795 kubelet[2308]: I0910 00:10:35.413766 2308 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:10:35.414453 kubelet[2308]: W0910 00:10:35.414425 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 10 00:10:35.417391 kubelet[2308]: W0910 00:10:35.416909 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:35.417391 kubelet[2308]: E0910 00:10:35.416988 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:35.417391 kubelet[2308]: W0910 00:10:35.417301 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:35.417391 kubelet[2308]: E0910 00:10:35.417351 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:35.418171 kubelet[2308]: I0910 00:10:35.418147 2308 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:10:35.418271 kubelet[2308]: I0910 00:10:35.418255 2308 server.go:1287] "Started kubelet" Sep 10 00:10:35.419187 kubelet[2308]: I0910 00:10:35.419131 2308 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:10:35.419569 kubelet[2308]: I0910 00:10:35.419510 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:10:35.419856 kubelet[2308]: I0910 00:10:35.419837 
2308 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:10:35.422656 kubelet[2308]: I0910 00:10:35.422629 2308 server.go:479] "Adding debug handlers to kubelet server" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.423779 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:10:35.426001 kubelet[2308]: E0910 00:10:35.423937 2308 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.423984 2308 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:10:35.426001 kubelet[2308]: E0910 00:10:35.424411 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.424445 2308 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.424637 2308 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.424703 2308 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:10:35.426001 kubelet[2308]: W0910 00:10:35.425124 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:35.426001 kubelet[2308]: E0910 00:10:35.425170 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: 
connection refused" logger="UnhandledError" Sep 10 00:10:35.426001 kubelet[2308]: E0910 00:10:35.425393 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Sep 10 00:10:35.426001 kubelet[2308]: I0910 00:10:35.425566 2308 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:10:35.426353 kubelet[2308]: I0910 00:10:35.425643 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:10:35.426637 kubelet[2308]: E0910 00:10:35.424916 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c351bbfabbbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:10:35.418164157 +0000 UTC m=+0.439515715,LastTimestamp:2025-09-10 00:10:35.418164157 +0000 UTC m=+0.439515715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:10:35.427337 kubelet[2308]: I0910 00:10:35.427312 2308 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:10:35.554644 kubelet[2308]: E0910 00:10:35.553742 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:35.557380 kubelet[2308]: I0910 
00:10:35.557358 2308 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:10:35.557380 kubelet[2308]: I0910 00:10:35.557377 2308 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:10:35.557478 kubelet[2308]: I0910 00:10:35.557399 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:10:35.561887 kubelet[2308]: I0910 00:10:35.561826 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:10:35.563569 kubelet[2308]: I0910 00:10:35.563534 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:10:35.563617 kubelet[2308]: I0910 00:10:35.563579 2308 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 00:10:35.563617 kubelet[2308]: I0910 00:10:35.563613 2308 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 00:10:35.563698 kubelet[2308]: I0910 00:10:35.563626 2308 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 00:10:35.563732 kubelet[2308]: E0910 00:10:35.563710 2308 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:10:35.564848 kubelet[2308]: W0910 00:10:35.564268 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:35.564848 kubelet[2308]: E0910 00:10:35.564313 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:35.626901 
kubelet[2308]: E0910 00:10:35.626731 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Sep 10 00:10:35.653951 kubelet[2308]: E0910 00:10:35.653880 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:35.664176 kubelet[2308]: E0910 00:10:35.664100 2308 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:10:35.754920 kubelet[2308]: E0910 00:10:35.754858 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:35.855500 kubelet[2308]: E0910 00:10:35.855418 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:35.864744 kubelet[2308]: E0910 00:10:35.864668 2308 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:10:35.956144 kubelet[2308]: E0910 00:10:35.955963 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:36.028397 kubelet[2308]: E0910 00:10:36.028319 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Sep 10 00:10:36.056285 kubelet[2308]: E0910 00:10:36.056222 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:36.156890 kubelet[2308]: E0910 00:10:36.156780 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Sep 10 00:10:36.171721 kubelet[2308]: I0910 00:10:36.171644 2308 policy_none.go:49] "None policy: Start" Sep 10 00:10:36.171721 kubelet[2308]: I0910 00:10:36.171705 2308 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:10:36.171721 kubelet[2308]: I0910 00:10:36.171734 2308 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:10:36.257577 kubelet[2308]: E0910 00:10:36.257438 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:36.265620 kubelet[2308]: E0910 00:10:36.265580 2308 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:10:36.277220 kubelet[2308]: W0910 00:10:36.277162 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:36.277266 kubelet[2308]: E0910 00:10:36.277235 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:36.330597 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 00:10:36.349042 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:10:36.352204 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 00:10:36.357845 kubelet[2308]: E0910 00:10:36.357816 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:10:36.362387 kubelet[2308]: E0910 00:10:36.362296 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c351bbfabbbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:10:35.418164157 +0000 UTC m=+0.439515715,LastTimestamp:2025-09-10 00:10:35.418164157 +0000 UTC m=+0.439515715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:10:36.365847 kubelet[2308]: I0910 00:10:36.365825 2308 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:10:36.366094 kubelet[2308]: I0910 00:10:36.366076 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:10:36.366151 kubelet[2308]: I0910 00:10:36.366095 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:10:36.366577 kubelet[2308]: I0910 00:10:36.366397 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:10:36.367317 kubelet[2308]: E0910 00:10:36.367275 2308 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:10:36.367367 kubelet[2308]: E0910 00:10:36.367355 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:10:36.468331 kubelet[2308]: I0910 00:10:36.468292 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:36.468700 kubelet[2308]: E0910 00:10:36.468646 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 10 00:10:36.554504 kubelet[2308]: W0910 00:10:36.554411 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:36.554504 kubelet[2308]: E0910 00:10:36.554481 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:36.613258 kubelet[2308]: W0910 00:10:36.613173 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:36.613258 kubelet[2308]: E0910 00:10:36.613240 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: 
connect: connection refused" logger="UnhandledError" Sep 10 00:10:36.670501 kubelet[2308]: I0910 00:10:36.670455 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:36.670903 kubelet[2308]: E0910 00:10:36.670869 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 10 00:10:36.829742 kubelet[2308]: E0910 00:10:36.829600 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" Sep 10 00:10:36.974324 kubelet[2308]: W0910 00:10:36.974235 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:36.974483 kubelet[2308]: E0910 00:10:36.974337 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:37.072261 kubelet[2308]: I0910 00:10:37.072219 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:37.072617 kubelet[2308]: E0910 00:10:37.072586 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 10 00:10:37.074411 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer 
container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 10 00:10:37.097961 kubelet[2308]: E0910 00:10:37.097842 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:37.100228 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 10 00:10:37.111258 kubelet[2308]: E0910 00:10:37.111207 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:37.114229 systemd[1]: Created slice kubepods-burstable-pod7c308489908dfc16b9dacc8c28578275.slice - libcontainer container kubepods-burstable-pod7c308489908dfc16b9dacc8c28578275.slice. Sep 10 00:10:37.115960 kubelet[2308]: E0910 00:10:37.115938 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:37.162582 kubelet[2308]: I0910 00:10:37.162517 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:37.162582 kubelet[2308]: I0910 00:10:37.162568 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:37.163099 kubelet[2308]: I0910 00:10:37.162602 2308 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:10:37.163099 kubelet[2308]: I0910 00:10:37.162628 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:10:37.163099 kubelet[2308]: I0910 00:10:37.162689 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:37.163099 kubelet[2308]: I0910 00:10:37.162724 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:37.163099 kubelet[2308]: I0910 00:10:37.162743 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:37.163213 kubelet[2308]: I0910 00:10:37.162770 
2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:10:37.163213 kubelet[2308]: I0910 00:10:37.162797 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:10:37.399373 kubelet[2308]: E0910 00:10:37.399229 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:37.400204 containerd[1498]: time="2025-09-10T00:10:37.400159674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 10 00:10:37.412362 kubelet[2308]: E0910 00:10:37.412332 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:37.412738 containerd[1498]: time="2025-09-10T00:10:37.412713648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 10 00:10:37.417120 kubelet[2308]: E0910 00:10:37.417093 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:37.417383 containerd[1498]: 
time="2025-09-10T00:10:37.417358969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c308489908dfc16b9dacc8c28578275,Namespace:kube-system,Attempt:0,}" Sep 10 00:10:37.423688 kubelet[2308]: E0910 00:10:37.423639 2308 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:37.874948 kubelet[2308]: I0910 00:10:37.874903 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:37.875316 kubelet[2308]: E0910 00:10:37.875280 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 10 00:10:38.430553 kubelet[2308]: E0910 00:10:38.430490 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="3.2s" Sep 10 00:10:38.726865 kubelet[2308]: W0910 00:10:38.726728 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:38.726865 kubelet[2308]: E0910 00:10:38.726777 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: 
connection refused" logger="UnhandledError" Sep 10 00:10:39.142573 kubelet[2308]: W0910 00:10:39.142519 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:39.142573 kubelet[2308]: E0910 00:10:39.142574 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:39.383439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832162732.mount: Deactivated successfully. Sep 10 00:10:39.394505 containerd[1498]: time="2025-09-10T00:10:39.394398937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:10:39.395644 containerd[1498]: time="2025-09-10T00:10:39.395589031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:10:39.398130 containerd[1498]: time="2025-09-10T00:10:39.398074093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 10 00:10:39.399115 containerd[1498]: time="2025-09-10T00:10:39.398977175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:10:39.400154 containerd[1498]: time="2025-09-10T00:10:39.400114208Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:10:39.402861 containerd[1498]: time="2025-09-10T00:10:39.402709690Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:10:39.402861 containerd[1498]: time="2025-09-10T00:10:39.402819797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:10:39.406380 containerd[1498]: time="2025-09-10T00:10:39.406328108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:10:39.408834 containerd[1498]: time="2025-09-10T00:10:39.408758218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.99132565s" Sep 10 00:10:39.409653 containerd[1498]: time="2025-09-10T00:10:39.409579095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.009252927s" Sep 10 00:10:39.412579 containerd[1498]: time="2025-09-10T00:10:39.412536829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.999755814s" Sep 10 00:10:39.512462 kubelet[2308]: I0910 00:10:39.512354 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:39.513013 kubelet[2308]: E0910 00:10:39.512891 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 10 00:10:39.621643 containerd[1498]: time="2025-09-10T00:10:39.621312787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:10:39.621643 containerd[1498]: time="2025-09-10T00:10:39.621367832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:10:39.621643 containerd[1498]: time="2025-09-10T00:10:39.621378752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.621643 containerd[1498]: time="2025-09-10T00:10:39.621450738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.623863 containerd[1498]: time="2025-09-10T00:10:39.620501568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:10:39.623863 containerd[1498]: time="2025-09-10T00:10:39.621834400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:10:39.623863 containerd[1498]: time="2025-09-10T00:10:39.621853707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.623863 containerd[1498]: time="2025-09-10T00:10:39.621942374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.630227 containerd[1498]: time="2025-09-10T00:10:39.630076814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:10:39.630227 containerd[1498]: time="2025-09-10T00:10:39.630150523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:10:39.630227 containerd[1498]: time="2025-09-10T00:10:39.630162135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.630827 containerd[1498]: time="2025-09-10T00:10:39.630698025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:10:39.650987 systemd[1]: Started cri-containerd-a3dd11d455b8234dae9a43064b3ddc57591897fc8b5ad8a036ac5d16254503f6.scope - libcontainer container a3dd11d455b8234dae9a43064b3ddc57591897fc8b5ad8a036ac5d16254503f6. Sep 10 00:10:39.656074 systemd[1]: Started cri-containerd-28ad9fd959859c0a2a6f946b1c846c3dcbc8e6fd91accdf9fb508885ec73ab28.scope - libcontainer container 28ad9fd959859c0a2a6f946b1c846c3dcbc8e6fd91accdf9fb508885ec73ab28. Sep 10 00:10:39.662000 systemd[1]: Started cri-containerd-7af1777d0eea68a68d584ff487d8382cde57dce8bc3eedec7d1d68f1345ffaf1.scope - libcontainer container 7af1777d0eea68a68d584ff487d8382cde57dce8bc3eedec7d1d68f1345ffaf1. 
Sep 10 00:10:39.717377 kubelet[2308]: W0910 00:10:39.717218 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 10 00:10:39.717377 kubelet[2308]: E0910 00:10:39.717267 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:10:39.728109 containerd[1498]: time="2025-09-10T00:10:39.728069887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ad9fd959859c0a2a6f946b1c846c3dcbc8e6fd91accdf9fb508885ec73ab28\"" Sep 10 00:10:39.730792 kubelet[2308]: E0910 00:10:39.730550 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:39.733283 containerd[1498]: time="2025-09-10T00:10:39.733245881Z" level=info msg="CreateContainer within sandbox \"28ad9fd959859c0a2a6f946b1c846c3dcbc8e6fd91accdf9fb508885ec73ab28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:10:39.734039 containerd[1498]: time="2025-09-10T00:10:39.734008058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7af1777d0eea68a68d584ff487d8382cde57dce8bc3eedec7d1d68f1345ffaf1\"" Sep 10 00:10:39.735975 containerd[1498]: time="2025-09-10T00:10:39.735861590Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c308489908dfc16b9dacc8c28578275,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3dd11d455b8234dae9a43064b3ddc57591897fc8b5ad8a036ac5d16254503f6\"" Sep 10 00:10:39.736581 kubelet[2308]: E0910 00:10:39.736555 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:39.736686 kubelet[2308]: E0910 00:10:39.736563 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:39.738296 containerd[1498]: time="2025-09-10T00:10:39.738263307Z" level=info msg="CreateContainer within sandbox \"7af1777d0eea68a68d584ff487d8382cde57dce8bc3eedec7d1d68f1345ffaf1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:10:39.738412 containerd[1498]: time="2025-09-10T00:10:39.738356422Z" level=info msg="CreateContainer within sandbox \"a3dd11d455b8234dae9a43064b3ddc57591897fc8b5ad8a036ac5d16254503f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:10:39.761715 containerd[1498]: time="2025-09-10T00:10:39.761680157Z" level=info msg="CreateContainer within sandbox \"28ad9fd959859c0a2a6f946b1c846c3dcbc8e6fd91accdf9fb508885ec73ab28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"89748d56ffc76d8f8394fce00a0508c7fbee068e3de05872de911ae938519585\"" Sep 10 00:10:39.762371 containerd[1498]: time="2025-09-10T00:10:39.762334160Z" level=info msg="StartContainer for \"89748d56ffc76d8f8394fce00a0508c7fbee068e3de05872de911ae938519585\"" Sep 10 00:10:39.767219 containerd[1498]: time="2025-09-10T00:10:39.767177257Z" level=info msg="CreateContainer within sandbox \"7af1777d0eea68a68d584ff487d8382cde57dce8bc3eedec7d1d68f1345ffaf1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"ef741916aa96ec58245f748b0ca34701892bc1207fe4974f2b377d217577fa8d\"" Sep 10 00:10:39.767546 containerd[1498]: time="2025-09-10T00:10:39.767526685Z" level=info msg="StartContainer for \"ef741916aa96ec58245f748b0ca34701892bc1207fe4974f2b377d217577fa8d\"" Sep 10 00:10:39.767902 containerd[1498]: time="2025-09-10T00:10:39.767875572Z" level=info msg="CreateContainer within sandbox \"a3dd11d455b8234dae9a43064b3ddc57591897fc8b5ad8a036ac5d16254503f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6cc85640d238f6718066b35c3f19ce00b96ef49f8abf7c2375232ef03a1335e\"" Sep 10 00:10:39.768257 containerd[1498]: time="2025-09-10T00:10:39.768225943Z" level=info msg="StartContainer for \"b6cc85640d238f6718066b35c3f19ce00b96ef49f8abf7c2375232ef03a1335e\"" Sep 10 00:10:39.798970 systemd[1]: Started cri-containerd-89748d56ffc76d8f8394fce00a0508c7fbee068e3de05872de911ae938519585.scope - libcontainer container 89748d56ffc76d8f8394fce00a0508c7fbee068e3de05872de911ae938519585. Sep 10 00:10:39.800364 systemd[1]: Started cri-containerd-ef741916aa96ec58245f748b0ca34701892bc1207fe4974f2b377d217577fa8d.scope - libcontainer container ef741916aa96ec58245f748b0ca34701892bc1207fe4974f2b377d217577fa8d. Sep 10 00:10:39.804051 systemd[1]: Started cri-containerd-b6cc85640d238f6718066b35c3f19ce00b96ef49f8abf7c2375232ef03a1335e.scope - libcontainer container b6cc85640d238f6718066b35c3f19ce00b96ef49f8abf7c2375232ef03a1335e. 
Sep 10 00:10:39.865637 containerd[1498]: time="2025-09-10T00:10:39.865581374Z" level=info msg="StartContainer for \"89748d56ffc76d8f8394fce00a0508c7fbee068e3de05872de911ae938519585\" returns successfully" Sep 10 00:10:39.865791 containerd[1498]: time="2025-09-10T00:10:39.865685490Z" level=info msg="StartContainer for \"b6cc85640d238f6718066b35c3f19ce00b96ef49f8abf7c2375232ef03a1335e\" returns successfully" Sep 10 00:10:39.883314 containerd[1498]: time="2025-09-10T00:10:39.883263777Z" level=info msg="StartContainer for \"ef741916aa96ec58245f748b0ca34701892bc1207fe4974f2b377d217577fa8d\" returns successfully" Sep 10 00:10:40.576473 kubelet[2308]: E0910 00:10:40.576212 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:40.576473 kubelet[2308]: E0910 00:10:40.576377 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:40.579472 kubelet[2308]: E0910 00:10:40.579432 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:40.579626 kubelet[2308]: E0910 00:10:40.579601 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:40.581780 kubelet[2308]: E0910 00:10:40.581763 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:40.581892 kubelet[2308]: E0910 00:10:40.581877 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:41.416577 
kubelet[2308]: I0910 00:10:41.416513 2308 apiserver.go:52] "Watching apiserver" Sep 10 00:10:41.425330 kubelet[2308]: I0910 00:10:41.425291 2308 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:10:41.584061 kubelet[2308]: E0910 00:10:41.584032 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:41.584499 kubelet[2308]: E0910 00:10:41.584137 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:41.584499 kubelet[2308]: E0910 00:10:41.584162 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:41.584499 kubelet[2308]: E0910 00:10:41.584275 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:41.660501 kubelet[2308]: E0910 00:10:41.660445 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:10:41.747022 kubelet[2308]: E0910 00:10:41.746868 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:10:42.080207 kubelet[2308]: E0910 00:10:42.080158 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:10:42.511159 kubelet[2308]: E0910 00:10:42.511035 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"localhost" not found Sep 10 00:10:42.585855 kubelet[2308]: E0910 00:10:42.585797 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:10:42.586275 kubelet[2308]: E0910 00:10:42.585976 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:42.714880 kubelet[2308]: I0910 00:10:42.714828 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:10:42.720861 kubelet[2308]: I0910 00:10:42.720818 2308 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:10:42.726188 kubelet[2308]: I0910 00:10:42.726159 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:10:42.733550 kubelet[2308]: I0910 00:10:42.733522 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:10:42.734167 kubelet[2308]: E0910 00:10:42.734128 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:42.738277 kubelet[2308]: I0910 00:10:42.738244 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:10:42.743260 kubelet[2308]: E0910 00:10:42.743226 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:43.483474 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-9.scope)... Sep 10 00:10:43.483488 systemd[1]: Reloading... 
Sep 10 00:10:43.586630 kubelet[2308]: E0910 00:10:43.586595 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:43.598840 zram_generator::config[2633]: No configuration found.
Sep 10 00:10:43.710094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:10:43.836196 systemd[1]: Reloading finished in 352 ms.
Sep 10 00:10:43.864050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:10:43.891170 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:10:43.891531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:10:43.891606 systemd[1]: kubelet.service: Consumed 1.289s CPU time, 133.2M memory peak.
Sep 10 00:10:43.902042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:10:44.099228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:10:44.104079 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:10:44.149880 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:10:44.149880 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:10:44.149880 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:10:44.149880 kubelet[2674]: I0910 00:10:44.148740 2674 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:10:44.155760 kubelet[2674]: I0910 00:10:44.155709 2674 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 00:10:44.155760 kubelet[2674]: I0910 00:10:44.155740 2674 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:10:44.156064 kubelet[2674]: I0910 00:10:44.156046 2674 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 00:10:44.157430 kubelet[2674]: I0910 00:10:44.157411 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 00:10:44.160651 kubelet[2674]: I0910 00:10:44.160450 2674 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:10:44.165950 kubelet[2674]: E0910 00:10:44.165912 2674 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:10:44.165950 kubelet[2674]: I0910 00:10:44.165946 2674 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:10:44.171407 kubelet[2674]: I0910 00:10:44.171373 2674 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:10:44.171667 kubelet[2674]: I0910 00:10:44.171632 2674 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:10:44.171869 kubelet[2674]: I0910 00:10:44.171663 2674 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:10:44.171981 kubelet[2674]: I0910 00:10:44.171879 2674 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:10:44.171981 kubelet[2674]: I0910 00:10:44.171889 2674 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 00:10:44.171981 kubelet[2674]: I0910 00:10:44.171939 2674 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:10:44.172112 kubelet[2674]: I0910 00:10:44.172095 2674 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 00:10:44.172144 kubelet[2674]: I0910 00:10:44.172120 2674 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:10:44.172144 kubelet[2674]: I0910 00:10:44.172140 2674 kubelet.go:352] "Adding apiserver pod source"
Sep 10 00:10:44.172336 kubelet[2674]: I0910 00:10:44.172154 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:10:44.175437 kubelet[2674]: I0910 00:10:44.175297 2674 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 10 00:10:44.176049 kubelet[2674]: I0910 00:10:44.176034 2674 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:10:44.176737 kubelet[2674]: I0910 00:10:44.176722 2674 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 00:10:44.177043 kubelet[2674]: I0910 00:10:44.176821 2674 server.go:1287] "Started kubelet"
Sep 10 00:10:44.177043 kubelet[2674]: I0910 00:10:44.176886 2674 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:10:44.177180 kubelet[2674]: I0910 00:10:44.177025 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:10:44.177579 kubelet[2674]: I0910 00:10:44.177541 2674 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:10:44.178878 kubelet[2674]: I0910 00:10:44.178860 2674 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 00:10:44.180492 kubelet[2674]: I0910 00:10:44.180478 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:10:44.181766 kubelet[2674]: E0910 00:10:44.181750 2674 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:10:44.182725 kubelet[2674]: I0910 00:10:44.182028 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:10:44.182725 kubelet[2674]: E0910 00:10:44.182033 2674 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:10:44.182725 kubelet[2674]: I0910 00:10:44.182073 2674 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 00:10:44.182725 kubelet[2674]: I0910 00:10:44.182230 2674 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 00:10:44.187421 kubelet[2674]: I0910 00:10:44.187389 2674 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:10:44.187753 kubelet[2674]: I0910 00:10:44.187727 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:10:44.188469 kubelet[2674]: I0910 00:10:44.188439 2674 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:10:44.188969 kubelet[2674]: I0910 00:10:44.188943 2674 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:10:44.199087 kubelet[2674]: I0910 00:10:44.199002 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:10:44.200515 kubelet[2674]: I0910 00:10:44.200479 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:10:44.200515 kubelet[2674]: I0910 00:10:44.200503 2674 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 10 00:10:44.200630 kubelet[2674]: I0910 00:10:44.200525 2674 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 00:10:44.200630 kubelet[2674]: I0910 00:10:44.200534 2674 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 00:10:44.200630 kubelet[2674]: E0910 00:10:44.200596 2674 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:10:44.225102 kubelet[2674]: I0910 00:10:44.225075 2674 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 00:10:44.225102 kubelet[2674]: I0910 00:10:44.225091 2674 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 00:10:44.225102 kubelet[2674]: I0910 00:10:44.225113 2674 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:10:44.225304 kubelet[2674]: I0910 00:10:44.225271 2674 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 00:10:44.225304 kubelet[2674]: I0910 00:10:44.225282 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 00:10:44.225304 kubelet[2674]: I0910 00:10:44.225300 2674 policy_none.go:49] "None policy: Start"
Sep 10 00:10:44.225375 kubelet[2674]: I0910 00:10:44.225309 2674 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 00:10:44.225375 kubelet[2674]: I0910 00:10:44.225319 2674 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:10:44.225467 kubelet[2674]: I0910 00:10:44.225451 2674 state_mem.go:75] "Updated machine memory state"
Sep 10 00:10:44.234824 kubelet[2674]: I0910 00:10:44.234759 2674 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:10:44.235413 kubelet[2674]: I0910 00:10:44.235384 2674 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:10:44.235470 kubelet[2674]: I0910 00:10:44.235406 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:10:44.235764 kubelet[2674]: I0910 00:10:44.235674 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:10:44.240826 kubelet[2674]: E0910 00:10:44.238779 2674 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 00:10:44.302512 kubelet[2674]: I0910 00:10:44.302426 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:44.302512 kubelet[2674]: I0910 00:10:44.302506 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:10:44.302764 kubelet[2674]: I0910 00:10:44.302745 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.310468 kubelet[2674]: E0910 00:10:44.310421 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.310886 kubelet[2674]: E0910 00:10:44.310846 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:44.310886 kubelet[2674]: E0910 00:10:44.310870 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:10:44.343599 kubelet[2674]: I0910 00:10:44.343505 2674 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:10:44.353873 kubelet[2674]: I0910 00:10:44.353705 2674 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 00:10:44.353873 kubelet[2674]: I0910 00:10:44.353828 2674 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 00:10:44.390363 kubelet[2674]: I0910 00:10:44.390253 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.390363 kubelet[2674]: I0910 00:10:44.390340 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.390363 kubelet[2674]: I0910 00:10:44.390373 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.390881 kubelet[2674]: I0910 00:10:44.390407 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:10:44.390881 kubelet[2674]: I0910 00:10:44.390429 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:44.390881 kubelet[2674]: I0910 00:10:44.390457 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:44.390881 kubelet[2674]: I0910 00:10:44.390481 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.390881 kubelet[2674]: I0910 00:10:44.390509 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:10:44.391073 kubelet[2674]: I0910 00:10:44.390559 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c308489908dfc16b9dacc8c28578275-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c308489908dfc16b9dacc8c28578275\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:44.488256 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 10 00:10:44.488739 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 10 00:10:44.612728 kubelet[2674]: E0910 00:10:44.611566 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:44.612728 kubelet[2674]: E0910 00:10:44.611617 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:44.612728 kubelet[2674]: E0910 00:10:44.611802 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:45.044628 sudo[2710]: pam_unix(sudo:session): session closed for user root
Sep 10 00:10:45.173159 kubelet[2674]: I0910 00:10:45.173111 2674 apiserver.go:52] "Watching apiserver"
Sep 10 00:10:45.183147 kubelet[2674]: I0910 00:10:45.183099 2674 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 00:10:45.211027 kubelet[2674]: I0910 00:10:45.210822 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:10:45.211027 kubelet[2674]: I0910 00:10:45.210919 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:45.211290 kubelet[2674]: E0910 00:10:45.211272 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:45.374360 kubelet[2674]: E0910 00:10:45.374216 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:10:45.374512 kubelet[2674]: E0910 00:10:45.374417 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:45.374902 kubelet[2674]: E0910 00:10:45.374884 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:10:45.374991 kubelet[2674]: E0910 00:10:45.374978 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:45.913651 kubelet[2674]: I0910 00:10:45.913524 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.9132087220000003 podStartE2EDuration="3.913208722s" podCreationTimestamp="2025-09-10 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:10:45.864010676 +0000 UTC m=+1.753655799" watchObservedRunningTime="2025-09-10 00:10:45.913208722 +0000 UTC m=+1.802853835"
Sep 10 00:10:45.940585 kubelet[2674]: I0910 00:10:45.940505 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.9404762250000003 podStartE2EDuration="3.940476225s" podCreationTimestamp="2025-09-10 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:10:45.913852523 +0000 UTC m=+1.803497636" watchObservedRunningTime="2025-09-10 00:10:45.940476225 +0000 UTC m=+1.830121338"
Sep 10 00:10:46.081527 kubelet[2674]: I0910 00:10:46.081393 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.081366305 podStartE2EDuration="4.081366305s" podCreationTimestamp="2025-09-10 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:10:45.940729201 +0000 UTC m=+1.830374314" watchObservedRunningTime="2025-09-10 00:10:46.081366305 +0000 UTC m=+1.971011418"
Sep 10 00:10:46.212750 kubelet[2674]: E0910 00:10:46.211948 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:46.213389 kubelet[2674]: E0910 00:10:46.213353 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:46.786437 sudo[1703]: pam_unix(sudo:session): session closed for user root
Sep 10 00:10:46.788206 sshd[1702]: Connection closed by 10.0.0.1 port 33874
Sep 10 00:10:46.788660 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:46.793550 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:33874.service: Deactivated successfully.
Sep 10 00:10:46.796004 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:10:46.796244 systemd[1]: session-9.scope: Consumed 4.465s CPU time, 252.6M memory peak.
Sep 10 00:10:46.797486 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:10:46.798416 systemd-logind[1484]: Removed session 9.
Sep 10 00:10:50.411274 kubelet[2674]: I0910 00:10:50.411228 2674 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 00:10:50.411893 kubelet[2674]: I0910 00:10:50.411756 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 00:10:50.411930 containerd[1498]: time="2025-09-10T00:10:50.411573121Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 00:10:51.773490 systemd[1]: Created slice kubepods-burstable-pod1ed673c9_f1f6_483a_9e4c_3c7b3c708d64.slice - libcontainer container kubepods-burstable-pod1ed673c9_f1f6_483a_9e4c_3c7b3c708d64.slice.
Sep 10 00:10:51.785972 systemd[1]: Created slice kubepods-besteffort-pod59e53a0e_e50a_4f75_b01c_d2c75f495c0d.slice - libcontainer container kubepods-besteffort-pod59e53a0e_e50a_4f75_b01c_d2c75f495c0d.slice.
Sep 10 00:10:51.799637 systemd[1]: Created slice kubepods-besteffort-podb26d1c48_ebfa_48d1_8300_f94572dffefc.slice - libcontainer container kubepods-besteffort-podb26d1c48_ebfa_48d1_8300_f94572dffefc.slice.
Sep 10 00:10:51.835464 kubelet[2674]: I0910 00:10:51.835390 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-cgroup\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.835464 kubelet[2674]: I0910 00:10:51.835448 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-kernel\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.835464 kubelet[2674]: I0910 00:10:51.835470 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-run\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836110 kubelet[2674]: I0910 00:10:51.835536 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-lib-modules\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836110 kubelet[2674]: I0910 00:10:51.835591 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-config-path\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836110 kubelet[2674]: I0910 00:10:51.835618 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b26d1c48-ebfa-48d1-8300-f94572dffefc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bdrgf\" (UID: \"b26d1c48-ebfa-48d1-8300-f94572dffefc\") " pod="kube-system/cilium-operator-6c4d7847fc-bdrgf"
Sep 10 00:10:51.836110 kubelet[2674]: I0910 00:10:51.835654 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59e53a0e-e50a-4f75-b01c-d2c75f495c0d-kube-proxy\") pod \"kube-proxy-zv8bb\" (UID: \"59e53a0e-e50a-4f75-b01c-d2c75f495c0d\") " pod="kube-system/kube-proxy-zv8bb"
Sep 10 00:10:51.836110 kubelet[2674]: I0910 00:10:51.835675 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-xtables-lock\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836272 kubelet[2674]: I0910 00:10:51.835704 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrnb\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-kube-api-access-csrnb\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836272 kubelet[2674]: I0910 00:10:51.835729 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hostproc\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836272 kubelet[2674]: I0910 00:10:51.835750 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-net\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836272 kubelet[2674]: I0910 00:10:51.835839 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hubble-tls\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836272 kubelet[2674]: I0910 00:10:51.835863 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8tct\" (UniqueName: \"kubernetes.io/projected/b26d1c48-ebfa-48d1-8300-f94572dffefc-kube-api-access-z8tct\") pod \"cilium-operator-6c4d7847fc-bdrgf\" (UID: \"b26d1c48-ebfa-48d1-8300-f94572dffefc\") " pod="kube-system/cilium-operator-6c4d7847fc-bdrgf"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.835887 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-bpf-maps\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.835908 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59e53a0e-e50a-4f75-b01c-d2c75f495c0d-xtables-lock\") pod \"kube-proxy-zv8bb\" (UID: \"59e53a0e-e50a-4f75-b01c-d2c75f495c0d\") " pod="kube-system/kube-proxy-zv8bb"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.835929 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59e53a0e-e50a-4f75-b01c-d2c75f495c0d-lib-modules\") pod \"kube-proxy-zv8bb\" (UID: \"59e53a0e-e50a-4f75-b01c-d2c75f495c0d\") " pod="kube-system/kube-proxy-zv8bb"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.835955 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sfsf\" (UniqueName: \"kubernetes.io/projected/59e53a0e-e50a-4f75-b01c-d2c75f495c0d-kube-api-access-7sfsf\") pod \"kube-proxy-zv8bb\" (UID: \"59e53a0e-e50a-4f75-b01c-d2c75f495c0d\") " pod="kube-system/kube-proxy-zv8bb"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.836012 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cni-path\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836531 kubelet[2674]: I0910 00:10:51.836034 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-etc-cni-netd\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:51.836721 kubelet[2674]: I0910 00:10:51.836058 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-clustermesh-secrets\") pod \"cilium-cw8dl\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " pod="kube-system/cilium-cw8dl"
Sep 10 00:10:52.080054 kubelet[2674]: E0910 00:10:52.079954 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.080893 containerd[1498]: time="2025-09-10T00:10:52.080828913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cw8dl,Uid:1ed673c9-f1f6-483a-9e4c-3c7b3c708d64,Namespace:kube-system,Attempt:0,}"
Sep 10 00:10:52.096674 kubelet[2674]: E0910 00:10:52.096599 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.097227 containerd[1498]: time="2025-09-10T00:10:52.097174430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zv8bb,Uid:59e53a0e-e50a-4f75-b01c-d2c75f495c0d,Namespace:kube-system,Attempt:0,}"
Sep 10 00:10:52.102510 kubelet[2674]: E0910 00:10:52.102460 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.103033 containerd[1498]: time="2025-09-10T00:10:52.102976703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bdrgf,Uid:b26d1c48-ebfa-48d1-8300-f94572dffefc,Namespace:kube-system,Attempt:0,}"
Sep 10 00:10:52.296609 containerd[1498]: time="2025-09-10T00:10:52.296422863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:10:52.296609 containerd[1498]: time="2025-09-10T00:10:52.296583625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:10:52.297020 containerd[1498]: time="2025-09-10T00:10:52.296599765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.297020 containerd[1498]: time="2025-09-10T00:10:52.296821011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.324442 containerd[1498]: time="2025-09-10T00:10:52.324276614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:10:52.324442 containerd[1498]: time="2025-09-10T00:10:52.324380859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:10:52.324442 containerd[1498]: time="2025-09-10T00:10:52.324409493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.324753 containerd[1498]: time="2025-09-10T00:10:52.324563803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.357182 systemd[1]: Started cri-containerd-15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c.scope - libcontainer container 15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c.
Sep 10 00:10:52.359898 containerd[1498]: time="2025-09-10T00:10:52.359582969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:10:52.359898 containerd[1498]: time="2025-09-10T00:10:52.359654884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:10:52.359898 containerd[1498]: time="2025-09-10T00:10:52.359668740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.359898 containerd[1498]: time="2025-09-10T00:10:52.359755032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:52.361985 systemd[1]: Started cri-containerd-3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0.scope - libcontainer container 3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0.
Sep 10 00:10:52.386002 systemd[1]: Started cri-containerd-b83301c982a67298236d054765bae02ff9e068e84c9eb8694ddd95796df7c0ef.scope - libcontainer container b83301c982a67298236d054765bae02ff9e068e84c9eb8694ddd95796df7c0ef.
Sep 10 00:10:52.400508 containerd[1498]: time="2025-09-10T00:10:52.400024644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cw8dl,Uid:1ed673c9-f1f6-483a-9e4c-3c7b3c708d64,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\""
Sep 10 00:10:52.400764 kubelet[2674]: E0910 00:10:52.400737 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.406130 containerd[1498]: time="2025-09-10T00:10:52.406084471Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 10 00:10:52.412828 containerd[1498]: time="2025-09-10T00:10:52.412397354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bdrgf,Uid:b26d1c48-ebfa-48d1-8300-f94572dffefc,Namespace:kube-system,Attempt:0,} returns sandbox id \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\""
Sep 10 00:10:52.413013 kubelet[2674]: E0910 00:10:52.412989 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.419535 containerd[1498]: time="2025-09-10T00:10:52.419501984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zv8bb,Uid:59e53a0e-e50a-4f75-b01c-d2c75f495c0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b83301c982a67298236d054765bae02ff9e068e84c9eb8694ddd95796df7c0ef\""
Sep 10 00:10:52.420533 kubelet[2674]: E0910 00:10:52.420399 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.422559 containerd[1498]: time="2025-09-10T00:10:52.422516038Z" level=info msg="CreateContainer within sandbox \"b83301c982a67298236d054765bae02ff9e068e84c9eb8694ddd95796df7c0ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 00:10:52.920214 kubelet[2674]: E0910 00:10:52.920153 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:52.951322 containerd[1498]: time="2025-09-10T00:10:52.951254905Z" level=info msg="CreateContainer within sandbox \"b83301c982a67298236d054765bae02ff9e068e84c9eb8694ddd95796df7c0ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7898628f4303d6162a24455a0a2682f70c9d59eb5aef1d99df3cd92349300792\""
Sep 10 00:10:52.951961 containerd[1498]: time="2025-09-10T00:10:52.951874770Z" level=info msg="StartContainer for \"7898628f4303d6162a24455a0a2682f70c9d59eb5aef1d99df3cd92349300792\""
Sep 10 00:10:52.982960 systemd[1]: Started cri-containerd-7898628f4303d6162a24455a0a2682f70c9d59eb5aef1d99df3cd92349300792.scope - libcontainer container 7898628f4303d6162a24455a0a2682f70c9d59eb5aef1d99df3cd92349300792.
Sep 10 00:10:53.018178 containerd[1498]: time="2025-09-10T00:10:53.018138403Z" level=info msg="StartContainer for \"7898628f4303d6162a24455a0a2682f70c9d59eb5aef1d99df3cd92349300792\" returns successfully" Sep 10 00:10:53.224616 kubelet[2674]: E0910 00:10:53.224403 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:53.225533 kubelet[2674]: E0910 00:10:53.225511 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:53.233986 kubelet[2674]: I0910 00:10:53.233903 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zv8bb" podStartSLOduration=2.233879352 podStartE2EDuration="2.233879352s" podCreationTimestamp="2025-09-10 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:10:53.233528743 +0000 UTC m=+9.123173867" watchObservedRunningTime="2025-09-10 00:10:53.233879352 +0000 UTC m=+9.123524465" Sep 10 00:10:54.139794 kubelet[2674]: E0910 00:10:54.139759 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:54.227229 kubelet[2674]: E0910 00:10:54.227199 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:54.665717 kubelet[2674]: E0910 00:10:54.665683 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:55.228955 kubelet[2674]: E0910 
00:10:55.228901 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:01.854062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019624119.mount: Deactivated successfully. Sep 10 00:11:05.415330 containerd[1498]: time="2025-09-10T00:11:05.415266261Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:11:05.416315 containerd[1498]: time="2025-09-10T00:11:05.416257941Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 10 00:11:05.417348 containerd[1498]: time="2025-09-10T00:11:05.417307501Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:11:05.418782 containerd[1498]: time="2025-09-10T00:11:05.418727106Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.012600285s" Sep 10 00:11:05.418782 containerd[1498]: time="2025-09-10T00:11:05.418774174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:11:05.420028 containerd[1498]: time="2025-09-10T00:11:05.419843481Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:11:05.421533 containerd[1498]: time="2025-09-10T00:11:05.421471336Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:11:05.436188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582974420.mount: Deactivated successfully. Sep 10 00:11:05.436599 containerd[1498]: time="2025-09-10T00:11:05.436452141Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\"" Sep 10 00:11:05.437350 containerd[1498]: time="2025-09-10T00:11:05.437324819Z" level=info msg="StartContainer for \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\"" Sep 10 00:11:05.468253 systemd[1]: run-containerd-runc-k8s.io-35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589-runc.RDO88Y.mount: Deactivated successfully. Sep 10 00:11:05.481981 systemd[1]: Started cri-containerd-35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589.scope - libcontainer container 35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589. Sep 10 00:11:06.121488 systemd[1]: cri-containerd-35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589.scope: Deactivated successfully. Sep 10 00:11:06.121951 systemd[1]: cri-containerd-35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589.scope: Consumed 28ms CPU time, 7M memory peak, 3.2M written to disk. 
Sep 10 00:11:06.222412 containerd[1498]: time="2025-09-10T00:11:06.222355818Z" level=info msg="StartContainer for \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\" returns successfully" Sep 10 00:11:06.433712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589-rootfs.mount: Deactivated successfully. Sep 10 00:11:06.692007 containerd[1498]: time="2025-09-10T00:11:06.691832541Z" level=info msg="shim disconnected" id=35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589 namespace=k8s.io Sep 10 00:11:06.692007 containerd[1498]: time="2025-09-10T00:11:06.691916048Z" level=warning msg="cleaning up after shim disconnected" id=35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589 namespace=k8s.io Sep 10 00:11:06.692007 containerd[1498]: time="2025-09-10T00:11:06.691927078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:11:07.407742 kubelet[2674]: E0910 00:11:07.407703 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:07.409447 containerd[1498]: time="2025-09-10T00:11:07.409411518Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:11:07.551984 containerd[1498]: time="2025-09-10T00:11:07.551937658Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\"" Sep 10 00:11:07.552555 containerd[1498]: time="2025-09-10T00:11:07.552532715Z" level=info msg="StartContainer for \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\"" Sep 10 
00:11:07.582942 systemd[1]: Started cri-containerd-098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8.scope - libcontainer container 098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8. Sep 10 00:11:07.608001 containerd[1498]: time="2025-09-10T00:11:07.607959627Z" level=info msg="StartContainer for \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\" returns successfully" Sep 10 00:11:07.622206 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:11:07.622549 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:11:07.622913 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:11:07.630741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:11:07.634108 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 00:11:07.634908 systemd[1]: cri-containerd-098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8.scope: Deactivated successfully. Sep 10 00:11:07.652656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 10 00:11:07.653939 containerd[1498]: time="2025-09-10T00:11:07.653887784Z" level=info msg="shim disconnected" id=098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8 namespace=k8s.io Sep 10 00:11:07.654046 containerd[1498]: time="2025-09-10T00:11:07.653939081Z" level=warning msg="cleaning up after shim disconnected" id=098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8 namespace=k8s.io Sep 10 00:11:07.654046 containerd[1498]: time="2025-09-10T00:11:07.653949600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:11:08.411179 kubelet[2674]: E0910 00:11:08.411071 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:08.413212 containerd[1498]: time="2025-09-10T00:11:08.413165040Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:11:08.435431 containerd[1498]: time="2025-09-10T00:11:08.435381458Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\"" Sep 10 00:11:08.436029 containerd[1498]: time="2025-09-10T00:11:08.435993747Z" level=info msg="StartContainer for \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\"" Sep 10 00:11:08.480098 systemd[1]: Started cri-containerd-237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670.scope - libcontainer container 237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670. Sep 10 00:11:08.526796 systemd[1]: cri-containerd-237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670.scope: Deactivated successfully. 
Sep 10 00:11:08.543269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8-rootfs.mount: Deactivated successfully. Sep 10 00:11:08.786922 containerd[1498]: time="2025-09-10T00:11:08.786879894Z" level=info msg="StartContainer for \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\" returns successfully" Sep 10 00:11:08.806713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670-rootfs.mount: Deactivated successfully. Sep 10 00:11:08.913648 containerd[1498]: time="2025-09-10T00:11:08.913574353Z" level=info msg="shim disconnected" id=237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670 namespace=k8s.io Sep 10 00:11:08.913648 containerd[1498]: time="2025-09-10T00:11:08.913636339Z" level=warning msg="cleaning up after shim disconnected" id=237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670 namespace=k8s.io Sep 10 00:11:08.913648 containerd[1498]: time="2025-09-10T00:11:08.913645186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:11:09.417862 kubelet[2674]: E0910 00:11:09.414675 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:09.420460 containerd[1498]: time="2025-09-10T00:11:09.420425164Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:11:09.932265 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:56278.service - OpenSSH per-connection server daemon (10.0.0.1:56278). 
Sep 10 00:11:10.016318 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 56278 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:11:10.018038 sshd-session[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:11:10.030723 systemd-logind[1484]: New session 10 of user core. Sep 10 00:11:10.045949 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:11:10.277388 containerd[1498]: time="2025-09-10T00:11:10.277242383Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\"" Sep 10 00:11:10.278578 containerd[1498]: time="2025-09-10T00:11:10.278488600Z" level=info msg="StartContainer for \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\"" Sep 10 00:11:10.313052 systemd[1]: Started cri-containerd-811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33.scope - libcontainer container 811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33. Sep 10 00:11:10.355019 systemd[1]: cri-containerd-811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33.scope: Deactivated successfully. Sep 10 00:11:10.507844 sshd[3276]: Connection closed by 10.0.0.1 port 56278 Sep 10 00:11:10.508253 sshd-session[3274]: pam_unix(sshd:session): session closed for user core Sep 10 00:11:10.513623 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:56278.service: Deactivated successfully. Sep 10 00:11:10.516452 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:11:10.517388 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:11:10.518412 systemd-logind[1484]: Removed session 10. 
Sep 10 00:11:10.535784 containerd[1498]: time="2025-09-10T00:11:10.535727205Z" level=info msg="StartContainer for \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\" returns successfully" Sep 10 00:11:10.540679 kubelet[2674]: E0910 00:11:10.540646 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:10.554867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33-rootfs.mount: Deactivated successfully. Sep 10 00:11:11.212306 containerd[1498]: time="2025-09-10T00:11:11.212237719Z" level=info msg="shim disconnected" id=811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33 namespace=k8s.io Sep 10 00:11:11.212306 containerd[1498]: time="2025-09-10T00:11:11.212293915Z" level=warning msg="cleaning up after shim disconnected" id=811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33 namespace=k8s.io Sep 10 00:11:11.212306 containerd[1498]: time="2025-09-10T00:11:11.212302461Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:11:11.220870 containerd[1498]: time="2025-09-10T00:11:11.220107865Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:11:11.222275 containerd[1498]: time="2025-09-10T00:11:11.222229626Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 10 00:11:11.224643 containerd[1498]: time="2025-09-10T00:11:11.224611976Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 
00:11:11.226211 containerd[1498]: time="2025-09-10T00:11:11.226171561Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.806288166s" Sep 10 00:11:11.226278 containerd[1498]: time="2025-09-10T00:11:11.226219571Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 00:11:11.229507 containerd[1498]: time="2025-09-10T00:11:11.229466163Z" level=info msg="CreateContainer within sandbox \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:11:11.262960 containerd[1498]: time="2025-09-10T00:11:11.262911463Z" level=info msg="CreateContainer within sandbox \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\"" Sep 10 00:11:11.263673 containerd[1498]: time="2025-09-10T00:11:11.263559960Z" level=info msg="StartContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\"" Sep 10 00:11:11.291989 systemd[1]: Started cri-containerd-d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e.scope - libcontainer container d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e. 
Sep 10 00:11:11.322176 containerd[1498]: time="2025-09-10T00:11:11.322025805Z" level=info msg="StartContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" returns successfully" Sep 10 00:11:11.543342 kubelet[2674]: E0910 00:11:11.543312 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:11.546355 kubelet[2674]: E0910 00:11:11.546329 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:11.548156 containerd[1498]: time="2025-09-10T00:11:11.548113703Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:11:11.956347 containerd[1498]: time="2025-09-10T00:11:11.955950020Z" level=info msg="CreateContainer within sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\"" Sep 10 00:11:11.957534 containerd[1498]: time="2025-09-10T00:11:11.956902777Z" level=info msg="StartContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\"" Sep 10 00:11:12.027011 systemd[1]: Started cri-containerd-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3.scope - libcontainer container 84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3. Sep 10 00:11:12.254975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018094757.mount: Deactivated successfully. 
Sep 10 00:11:12.477189 containerd[1498]: time="2025-09-10T00:11:12.477145025Z" level=info msg="StartContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" returns successfully" Sep 10 00:11:12.503667 systemd[1]: run-containerd-runc-k8s.io-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3-runc.FNCIUa.mount: Deactivated successfully. Sep 10 00:11:12.550682 kubelet[2674]: E0910 00:11:12.550636 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:12.610797 kubelet[2674]: I0910 00:11:12.589318 2674 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:11:12.769001 kubelet[2674]: I0910 00:11:12.768848 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bdrgf" podStartSLOduration=2.955892815 podStartE2EDuration="21.768819806s" podCreationTimestamp="2025-09-10 00:10:51 +0000 UTC" firstStartedPulling="2025-09-10 00:10:52.414831468 +0000 UTC m=+8.304476581" lastFinishedPulling="2025-09-10 00:11:11.227758459 +0000 UTC m=+27.117403572" observedRunningTime="2025-09-10 00:11:11.957128741 +0000 UTC m=+27.846773874" watchObservedRunningTime="2025-09-10 00:11:12.768819806 +0000 UTC m=+28.658464919" Sep 10 00:11:13.577607 kubelet[2674]: E0910 00:11:13.575972 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:14.028533 kubelet[2674]: I0910 00:11:14.005149 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cw8dl" podStartSLOduration=9.986568066 podStartE2EDuration="23.001081695s" podCreationTimestamp="2025-09-10 00:10:51 +0000 UTC" firstStartedPulling="2025-09-10 00:10:52.405156647 +0000 UTC m=+8.294801760" 
lastFinishedPulling="2025-09-10 00:11:05.419670276 +0000 UTC m=+21.309315389" observedRunningTime="2025-09-10 00:11:13.991777079 +0000 UTC m=+29.881422212" watchObservedRunningTime="2025-09-10 00:11:14.001081695 +0000 UTC m=+29.890726828" Sep 10 00:11:14.206797 kubelet[2674]: I0910 00:11:14.206718 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/a15b6c99-4df5-4c4c-ba1b-1000533f245c-kube-api-access-nnmnj\") pod \"coredns-668d6bf9bc-vvthb\" (UID: \"a15b6c99-4df5-4c4c-ba1b-1000533f245c\") " pod="kube-system/coredns-668d6bf9bc-vvthb" Sep 10 00:11:14.209838 kubelet[2674]: I0910 00:11:14.208598 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632fe779-e79b-44cc-a6ff-4af4de13872d-config-volume\") pod \"coredns-668d6bf9bc-gp72m\" (UID: \"632fe779-e79b-44cc-a6ff-4af4de13872d\") " pod="kube-system/coredns-668d6bf9bc-gp72m" Sep 10 00:11:14.209838 kubelet[2674]: I0910 00:11:14.208665 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vk2\" (UniqueName: \"kubernetes.io/projected/632fe779-e79b-44cc-a6ff-4af4de13872d-kube-api-access-48vk2\") pod \"coredns-668d6bf9bc-gp72m\" (UID: \"632fe779-e79b-44cc-a6ff-4af4de13872d\") " pod="kube-system/coredns-668d6bf9bc-gp72m" Sep 10 00:11:14.209838 kubelet[2674]: I0910 00:11:14.208696 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a15b6c99-4df5-4c4c-ba1b-1000533f245c-config-volume\") pod \"coredns-668d6bf9bc-vvthb\" (UID: \"a15b6c99-4df5-4c4c-ba1b-1000533f245c\") " pod="kube-system/coredns-668d6bf9bc-vvthb" Sep 10 00:11:14.255115 systemd[1]: Created slice kubepods-burstable-pod632fe779_e79b_44cc_a6ff_4af4de13872d.slice - libcontainer container 
kubepods-burstable-pod632fe779_e79b_44cc_a6ff_4af4de13872d.slice. Sep 10 00:11:14.276911 systemd[1]: Created slice kubepods-burstable-poda15b6c99_4df5_4c4c_ba1b_1000533f245c.slice - libcontainer container kubepods-burstable-poda15b6c99_4df5_4c4c_ba1b_1000533f245c.slice. Sep 10 00:11:14.566137 kubelet[2674]: E0910 00:11:14.566089 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:14.578567 kubelet[2674]: E0910 00:11:14.578522 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:14.582281 kubelet[2674]: E0910 00:11:14.582261 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:14.584782 containerd[1498]: time="2025-09-10T00:11:14.584707014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvthb,Uid:a15b6c99-4df5-4c4c-ba1b-1000533f245c,Namespace:kube-system,Attempt:0,}" Sep 10 00:11:14.585783 containerd[1498]: time="2025-09-10T00:11:14.585741856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gp72m,Uid:632fe779-e79b-44cc-a6ff-4af4de13872d,Namespace:kube-system,Attempt:0,}" Sep 10 00:11:15.485006 systemd-networkd[1414]: cilium_host: Link UP Sep 10 00:11:15.485903 systemd-networkd[1414]: cilium_net: Link UP Sep 10 00:11:15.486374 systemd-networkd[1414]: cilium_net: Gained carrier Sep 10 00:11:15.486944 systemd-networkd[1414]: cilium_host: Gained carrier Sep 10 00:11:15.529185 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:56284.service - OpenSSH per-connection server daemon (10.0.0.1:56284). 
Sep 10 00:11:15.554021 systemd-networkd[1414]: cilium_net: Gained IPv6LL Sep 10 00:11:15.579343 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 56284 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok Sep 10 00:11:15.581094 sshd-session[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:11:15.588486 systemd-logind[1484]: New session 11 of user core. Sep 10 00:11:15.597049 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:11:15.612195 systemd-networkd[1414]: cilium_vxlan: Link UP Sep 10 00:11:15.612206 systemd-networkd[1414]: cilium_vxlan: Gained carrier Sep 10 00:11:15.737723 sshd[3607]: Connection closed by 10.0.0.1 port 56284 Sep 10 00:11:15.740017 sshd-session[3565]: pam_unix(sshd:session): session closed for user core Sep 10 00:11:15.746166 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:56284.service: Deactivated successfully. Sep 10 00:11:15.749043 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:11:15.751176 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:11:15.752520 systemd-logind[1484]: Removed session 11. 
Sep 10 00:11:15.859855 kernel: NET: Registered PF_ALG protocol family Sep 10 00:11:16.435095 systemd-networkd[1414]: cilium_host: Gained IPv6LL Sep 10 00:11:16.621314 systemd-networkd[1414]: lxc_health: Link UP Sep 10 00:11:16.629235 systemd-networkd[1414]: lxc_health: Gained carrier Sep 10 00:11:16.882059 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Sep 10 00:11:17.169887 kernel: eth0: renamed from tmp0704b Sep 10 00:11:17.176491 systemd-networkd[1414]: lxcb57d0a494949: Link UP Sep 10 00:11:17.179452 systemd-networkd[1414]: lxcb57d0a494949: Gained carrier Sep 10 00:11:17.195481 systemd-networkd[1414]: lxcb190349608a6: Link UP Sep 10 00:11:17.199839 kernel: eth0: renamed from tmpbf0c1 Sep 10 00:11:17.207158 systemd-networkd[1414]: lxcb190349608a6: Gained carrier Sep 10 00:11:18.082297 kubelet[2674]: E0910 00:11:18.082158 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:18.097986 systemd-networkd[1414]: lxc_health: Gained IPv6LL Sep 10 00:11:18.546018 systemd-networkd[1414]: lxcb190349608a6: Gained IPv6LL Sep 10 00:11:18.578089 kubelet[2674]: E0910 00:11:18.578039 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:19.058069 systemd-networkd[1414]: lxcb57d0a494949: Gained IPv6LL Sep 10 00:11:19.580513 kubelet[2674]: E0910 00:11:19.580474 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:11:20.762059 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:33334.service - OpenSSH per-connection server daemon (10.0.0.1:33334). 
Sep 10 00:11:20.806774 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 33334 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:20.808618 sshd-session[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:20.814349 systemd-logind[1484]: New session 12 of user core.
Sep 10 00:11:20.823285 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 00:11:20.982504 sshd[3931]: Connection closed by 10.0.0.1 port 33334
Sep 10 00:11:20.984739 sshd-session[3926]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:20.989016 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:33334.service: Deactivated successfully.
Sep 10 00:11:20.992449 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:11:20.995681 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:11:20.997407 systemd-logind[1484]: Removed session 12.
Sep 10 00:11:21.000530 containerd[1498]: time="2025-09-10T00:11:21.000362117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:11:21.000530 containerd[1498]: time="2025-09-10T00:11:21.000435679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:11:21.000530 containerd[1498]: time="2025-09-10T00:11:21.000451119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:11:21.001103 containerd[1498]: time="2025-09-10T00:11:21.000673947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:11:21.001103 containerd[1498]: time="2025-09-10T00:11:21.000728723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:11:21.001103 containerd[1498]: time="2025-09-10T00:11:21.000743531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:11:21.001103 containerd[1498]: time="2025-09-10T00:11:21.000874544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:11:21.001290 containerd[1498]: time="2025-09-10T00:11:21.001219948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:11:21.038975 systemd[1]: Started cri-containerd-0704b4acf6be650cd3b3d454a3a3ef785d6e97503748314f7bf3d62943e1a254.scope - libcontainer container 0704b4acf6be650cd3b3d454a3a3ef785d6e97503748314f7bf3d62943e1a254.
Sep 10 00:11:21.040536 systemd[1]: Started cri-containerd-bf0c10b2f3f2206afe298512a67d49bf82c4f814b2832ddc9db5517ec7595ee1.scope - libcontainer container bf0c10b2f3f2206afe298512a67d49bf82c4f814b2832ddc9db5517ec7595ee1.
Sep 10 00:11:21.054588 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:11:21.056541 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:11:21.086871 containerd[1498]: time="2025-09-10T00:11:21.086736868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gp72m,Uid:632fe779-e79b-44cc-a6ff-4af4de13872d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0704b4acf6be650cd3b3d454a3a3ef785d6e97503748314f7bf3d62943e1a254\""
Sep 10 00:11:21.088323 kubelet[2674]: E0910 00:11:21.088293 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:21.090995 containerd[1498]: time="2025-09-10T00:11:21.090900288Z" level=info msg="CreateContainer within sandbox \"0704b4acf6be650cd3b3d454a3a3ef785d6e97503748314f7bf3d62943e1a254\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:11:21.100279 containerd[1498]: time="2025-09-10T00:11:21.100223715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvthb,Uid:a15b6c99-4df5-4c4c-ba1b-1000533f245c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf0c10b2f3f2206afe298512a67d49bf82c4f814b2832ddc9db5517ec7595ee1\""
Sep 10 00:11:21.101571 kubelet[2674]: E0910 00:11:21.101534 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:21.118822 containerd[1498]: time="2025-09-10T00:11:21.118760870Z" level=info msg="CreateContainer within sandbox \"0704b4acf6be650cd3b3d454a3a3ef785d6e97503748314f7bf3d62943e1a254\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b875ca0a08c0a5acab42916749ae9cd868faa8edf05ceea8ebe9a56ad9bcfb75\""
Sep 10 00:11:21.119384 containerd[1498]: time="2025-09-10T00:11:21.119356025Z" level=info msg="StartContainer for \"b875ca0a08c0a5acab42916749ae9cd868faa8edf05ceea8ebe9a56ad9bcfb75\""
Sep 10 00:11:21.131579 containerd[1498]: time="2025-09-10T00:11:21.131511602Z" level=info msg="CreateContainer within sandbox \"bf0c10b2f3f2206afe298512a67d49bf82c4f814b2832ddc9db5517ec7595ee1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:11:21.150044 systemd[1]: Started cri-containerd-b875ca0a08c0a5acab42916749ae9cd868faa8edf05ceea8ebe9a56ad9bcfb75.scope - libcontainer container b875ca0a08c0a5acab42916749ae9cd868faa8edf05ceea8ebe9a56ad9bcfb75.
Sep 10 00:11:21.152184 containerd[1498]: time="2025-09-10T00:11:21.152131764Z" level=info msg="CreateContainer within sandbox \"bf0c10b2f3f2206afe298512a67d49bf82c4f814b2832ddc9db5517ec7595ee1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"583318d4765b8e1f40a76332999ebcfbf49b57effbbd32cf5751f6836cb27eca\""
Sep 10 00:11:21.156230 containerd[1498]: time="2025-09-10T00:11:21.155189687Z" level=info msg="StartContainer for \"583318d4765b8e1f40a76332999ebcfbf49b57effbbd32cf5751f6836cb27eca\""
Sep 10 00:11:21.183263 systemd[1]: Started cri-containerd-583318d4765b8e1f40a76332999ebcfbf49b57effbbd32cf5751f6836cb27eca.scope - libcontainer container 583318d4765b8e1f40a76332999ebcfbf49b57effbbd32cf5751f6836cb27eca.
Sep 10 00:11:21.189597 containerd[1498]: time="2025-09-10T00:11:21.189278872Z" level=info msg="StartContainer for \"b875ca0a08c0a5acab42916749ae9cd868faa8edf05ceea8ebe9a56ad9bcfb75\" returns successfully"
Sep 10 00:11:21.217095 containerd[1498]: time="2025-09-10T00:11:21.217032980Z" level=info msg="StartContainer for \"583318d4765b8e1f40a76332999ebcfbf49b57effbbd32cf5751f6836cb27eca\" returns successfully"
Sep 10 00:11:21.585591 kubelet[2674]: E0910 00:11:21.585552 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:21.587256 kubelet[2674]: E0910 00:11:21.587217 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:21.606840 kubelet[2674]: I0910 00:11:21.606747 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gp72m" podStartSLOduration=30.606722329 podStartE2EDuration="30.606722329s" podCreationTimestamp="2025-09-10 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:11:21.596055377 +0000 UTC m=+37.485700490" watchObservedRunningTime="2025-09-10 00:11:21.606722329 +0000 UTC m=+37.496367442"
Sep 10 00:11:21.607067 kubelet[2674]: I0910 00:11:21.606941 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vvthb" podStartSLOduration=30.606924568 podStartE2EDuration="30.606924568s" podCreationTimestamp="2025-09-10 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:11:21.606515111 +0000 UTC m=+37.496160234" watchObservedRunningTime="2025-09-10 00:11:21.606924568 +0000 UTC m=+37.496569682"
Sep 10 00:11:22.589386 kubelet[2674]: E0910 00:11:22.589349 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:22.589900 kubelet[2674]: E0910 00:11:22.589528 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:23.591731 kubelet[2674]: E0910 00:11:23.591683 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:23.591731 kubelet[2674]: E0910 00:11:23.591728 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:11:25.997654 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:33342.service - OpenSSH per-connection server daemon (10.0.0.1:33342).
Sep 10 00:11:26.049439 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 33342 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:26.051725 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:26.056655 systemd-logind[1484]: New session 13 of user core.
Sep 10 00:11:26.071988 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 00:11:26.211045 sshd[4113]: Connection closed by 10.0.0.1 port 33342
Sep 10 00:11:26.211428 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:26.215286 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:33342.service: Deactivated successfully.
Sep 10 00:11:26.217302 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:11:26.218103 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:11:26.219123 systemd-logind[1484]: Removed session 13.
Sep 10 00:11:31.224519 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:38066.service - OpenSSH per-connection server daemon (10.0.0.1:38066).
Sep 10 00:11:31.265635 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 38066 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:31.267490 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:31.271857 systemd-logind[1484]: New session 14 of user core.
Sep 10 00:11:31.280999 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 00:11:31.413920 sshd[4129]: Connection closed by 10.0.0.1 port 38066
Sep 10 00:11:31.414358 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:31.419078 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:38066.service: Deactivated successfully.
Sep 10 00:11:31.421471 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:11:31.422220 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:11:31.423233 systemd-logind[1484]: Removed session 14.
Sep 10 00:11:36.427225 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:38080.service - OpenSSH per-connection server daemon (10.0.0.1:38080).
Sep 10 00:11:36.470659 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 38080 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:36.472502 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:36.477485 systemd-logind[1484]: New session 15 of user core.
Sep 10 00:11:36.498019 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 00:11:36.609973 sshd[4145]: Connection closed by 10.0.0.1 port 38080
Sep 10 00:11:36.610347 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:36.625009 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:38080.service: Deactivated successfully.
Sep 10 00:11:36.627198 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:11:36.628770 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:11:36.640144 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:38094.service - OpenSSH per-connection server daemon (10.0.0.1:38094).
Sep 10 00:11:36.641231 systemd-logind[1484]: Removed session 15.
Sep 10 00:11:36.683304 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 38094 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:36.684713 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:36.688846 systemd-logind[1484]: New session 16 of user core.
Sep 10 00:11:36.703940 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 00:11:36.916702 sshd[4162]: Connection closed by 10.0.0.1 port 38094
Sep 10 00:11:36.917129 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:36.928053 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:38094.service: Deactivated successfully.
Sep 10 00:11:36.931361 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:11:36.933031 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:11:36.943340 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:38102.service - OpenSSH per-connection server daemon (10.0.0.1:38102).
Sep 10 00:11:36.946495 systemd-logind[1484]: Removed session 16.
Sep 10 00:11:36.982977 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 38102 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:36.984564 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:36.989496 systemd-logind[1484]: New session 17 of user core.
Sep 10 00:11:36.998993 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 00:11:37.122063 sshd[4175]: Connection closed by 10.0.0.1 port 38102
Sep 10 00:11:37.122532 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:37.127427 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:38102.service: Deactivated successfully.
Sep 10 00:11:37.129788 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:11:37.130738 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:11:37.131734 systemd-logind[1484]: Removed session 17.
Sep 10 00:11:42.136899 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:58964.service - OpenSSH per-connection server daemon (10.0.0.1:58964).
Sep 10 00:11:42.178422 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 58964 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:42.180241 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:42.184732 systemd-logind[1484]: New session 18 of user core.
Sep 10 00:11:42.194984 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 00:11:42.311024 sshd[4190]: Connection closed by 10.0.0.1 port 58964
Sep 10 00:11:42.311429 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:42.315746 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:58964.service: Deactivated successfully.
Sep 10 00:11:42.318314 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:11:42.319151 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:11:42.320785 systemd-logind[1484]: Removed session 18.
Sep 10 00:11:47.323923 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:58976.service - OpenSSH per-connection server daemon (10.0.0.1:58976).
Sep 10 00:11:47.366274 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 58976 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:47.367943 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:47.373389 systemd-logind[1484]: New session 19 of user core.
Sep 10 00:11:47.382965 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 00:11:47.497305 sshd[4207]: Connection closed by 10.0.0.1 port 58976
Sep 10 00:11:47.497682 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:47.501930 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:58976.service: Deactivated successfully.
Sep 10 00:11:47.504122 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:11:47.505075 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:11:47.506059 systemd-logind[1484]: Removed session 19.
Sep 10 00:11:52.513916 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:59732.service - OpenSSH per-connection server daemon (10.0.0.1:59732).
Sep 10 00:11:52.558929 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 59732 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:52.560778 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:52.565278 systemd-logind[1484]: New session 20 of user core.
Sep 10 00:11:52.576957 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 00:11:52.690828 sshd[4224]: Connection closed by 10.0.0.1 port 59732
Sep 10 00:11:52.691364 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:52.705918 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:59732.service: Deactivated successfully.
Sep 10 00:11:52.707950 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:11:52.709543 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:11:52.719150 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740).
Sep 10 00:11:52.720334 systemd-logind[1484]: Removed session 20.
Sep 10 00:11:52.763727 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:52.765686 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:52.771526 systemd-logind[1484]: New session 21 of user core.
Sep 10 00:11:52.780939 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 00:11:53.015041 sshd[4239]: Connection closed by 10.0.0.1 port 59740
Sep 10 00:11:53.015409 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:53.026725 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:59740.service: Deactivated successfully.
Sep 10 00:11:53.029021 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:11:53.030691 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:11:53.040389 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:59746.service - OpenSSH per-connection server daemon (10.0.0.1:59746).
Sep 10 00:11:53.041646 systemd-logind[1484]: Removed session 21.
Sep 10 00:11:53.081260 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 59746 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:53.082780 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:53.087662 systemd-logind[1484]: New session 22 of user core.
Sep 10 00:11:53.096927 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 00:11:53.810508 sshd[4254]: Connection closed by 10.0.0.1 port 59746
Sep 10 00:11:53.811979 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:53.828404 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:59746.service: Deactivated successfully.
Sep 10 00:11:53.832290 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:11:53.834662 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:11:53.847311 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750).
Sep 10 00:11:53.848736 systemd-logind[1484]: Removed session 22.
Sep 10 00:11:53.885434 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:53.887211 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:53.891545 systemd-logind[1484]: New session 23 of user core.
Sep 10 00:11:53.901019 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 00:11:54.138605 sshd[4276]: Connection closed by 10.0.0.1 port 59750
Sep 10 00:11:54.138942 sshd-session[4273]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:54.156138 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:59750.service: Deactivated successfully.
Sep 10 00:11:54.158467 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:11:54.159376 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:11:54.167146 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:59756.service - OpenSSH per-connection server daemon (10.0.0.1:59756).
Sep 10 00:11:54.168362 systemd-logind[1484]: Removed session 23.
Sep 10 00:11:54.203352 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 59756 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:54.204983 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:54.209241 systemd-logind[1484]: New session 24 of user core.
Sep 10 00:11:54.224966 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 00:11:54.344247 sshd[4289]: Connection closed by 10.0.0.1 port 59756
Sep 10 00:11:54.344798 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:54.348758 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:59756.service: Deactivated successfully.
Sep 10 00:11:54.350927 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:11:54.351631 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:11:54.352548 systemd-logind[1484]: Removed session 24.
Sep 10 00:11:59.375698 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:59770.service - OpenSSH per-connection server daemon (10.0.0.1:59770).
Sep 10 00:11:59.423139 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 59770 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:11:59.425281 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:11:59.434976 systemd-logind[1484]: New session 25 of user core.
Sep 10 00:11:59.444129 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 00:11:59.675334 sshd[4305]: Connection closed by 10.0.0.1 port 59770
Sep 10 00:11:59.678871 sshd-session[4303]: pam_unix(sshd:session): session closed for user core
Sep 10 00:11:59.686980 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:59770.service: Deactivated successfully.
Sep 10 00:11:59.689775 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:11:59.695189 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:11:59.696338 systemd-logind[1484]: Removed session 25.
Sep 10 00:12:04.692061 systemd[1]: Started sshd@25-10.0.0.58:22-10.0.0.1:47040.service - OpenSSH per-connection server daemon (10.0.0.1:47040).
Sep 10 00:12:04.733267 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 47040 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:04.734710 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:04.738640 systemd-logind[1484]: New session 26 of user core.
Sep 10 00:12:04.746962 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 00:12:04.851929 sshd[4323]: Connection closed by 10.0.0.1 port 47040
Sep 10 00:12:04.852334 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:04.856304 systemd[1]: sshd@25-10.0.0.58:22-10.0.0.1:47040.service: Deactivated successfully.
Sep 10 00:12:04.858446 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:12:04.859203 systemd-logind[1484]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:12:04.860047 systemd-logind[1484]: Removed session 26.
Sep 10 00:12:09.864954 systemd[1]: Started sshd@26-10.0.0.58:22-10.0.0.1:47052.service - OpenSSH per-connection server daemon (10.0.0.1:47052).
Sep 10 00:12:09.906365 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 47052 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:09.907989 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:09.912761 systemd-logind[1484]: New session 27 of user core.
Sep 10 00:12:09.920979 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 00:12:10.054834 sshd[4338]: Connection closed by 10.0.0.1 port 47052
Sep 10 00:12:10.055294 sshd-session[4336]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:10.059952 systemd[1]: sshd@26-10.0.0.58:22-10.0.0.1:47052.service: Deactivated successfully.
Sep 10 00:12:10.062277 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:12:10.063100 systemd-logind[1484]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:12:10.064153 systemd-logind[1484]: Removed session 27.
Sep 10 00:12:12.202171 kubelet[2674]: E0910 00:12:12.202101 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:14.201612 kubelet[2674]: E0910 00:12:14.201576 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:15.071579 systemd[1]: Started sshd@27-10.0.0.58:22-10.0.0.1:57506.service - OpenSSH per-connection server daemon (10.0.0.1:57506).
Sep 10 00:12:15.117758 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 57506 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:15.119444 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:15.124293 systemd-logind[1484]: New session 28 of user core.
Sep 10 00:12:15.133979 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 10 00:12:15.247193 sshd[4353]: Connection closed by 10.0.0.1 port 57506
Sep 10 00:12:15.247648 sshd-session[4351]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:15.258842 systemd[1]: sshd@27-10.0.0.58:22-10.0.0.1:57506.service: Deactivated successfully.
Sep 10 00:12:15.261346 systemd[1]: session-28.scope: Deactivated successfully.
Sep 10 00:12:15.263162 systemd-logind[1484]: Session 28 logged out. Waiting for processes to exit.
Sep 10 00:12:15.269312 systemd[1]: Started sshd@28-10.0.0.58:22-10.0.0.1:57520.service - OpenSSH per-connection server daemon (10.0.0.1:57520).
Sep 10 00:12:15.270614 systemd-logind[1484]: Removed session 28.
Sep 10 00:12:15.313295 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 57520 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:15.315442 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:15.320744 systemd-logind[1484]: New session 29 of user core.
Sep 10 00:12:15.330146 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 10 00:12:16.679687 containerd[1498]: time="2025-09-10T00:12:16.679631527Z" level=info msg="StopContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" with timeout 30 (s)"
Sep 10 00:12:16.686062 containerd[1498]: time="2025-09-10T00:12:16.686003482Z" level=info msg="Stop container \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" with signal terminated"
Sep 10 00:12:16.701299 systemd[1]: run-containerd-runc-k8s.io-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3-runc.sEGdVz.mount: Deactivated successfully.
Sep 10 00:12:16.706106 systemd[1]: cri-containerd-d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e.scope: Deactivated successfully.
Sep 10 00:12:16.725453 containerd[1498]: time="2025-09-10T00:12:16.725362965Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:12:16.728449 containerd[1498]: time="2025-09-10T00:12:16.728267119Z" level=info msg="StopContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" with timeout 2 (s)"
Sep 10 00:12:16.728649 containerd[1498]: time="2025-09-10T00:12:16.728624704Z" level=info msg="Stop container \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" with signal terminated"
Sep 10 00:12:16.737193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e-rootfs.mount: Deactivated successfully.
Sep 10 00:12:16.740399 systemd-networkd[1414]: lxc_health: Link DOWN
Sep 10 00:12:16.740411 systemd-networkd[1414]: lxc_health: Lost carrier
Sep 10 00:12:16.748275 containerd[1498]: time="2025-09-10T00:12:16.748176439Z" level=info msg="shim disconnected" id=d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e namespace=k8s.io
Sep 10 00:12:16.748275 containerd[1498]: time="2025-09-10T00:12:16.748271288Z" level=warning msg="cleaning up after shim disconnected" id=d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e namespace=k8s.io
Sep 10 00:12:16.748512 containerd[1498]: time="2025-09-10T00:12:16.748285525Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:16.767513 systemd[1]: cri-containerd-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3.scope: Deactivated successfully.
Sep 10 00:12:16.768040 systemd[1]: cri-containerd-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3.scope: Consumed 8.173s CPU time, 123.5M memory peak, 236K read from disk, 13.3M written to disk.
Sep 10 00:12:16.782905 containerd[1498]: time="2025-09-10T00:12:16.781216947Z" level=info msg="StopContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" returns successfully"
Sep 10 00:12:16.785478 containerd[1498]: time="2025-09-10T00:12:16.785421589Z" level=info msg="StopPodSandbox for \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\""
Sep 10 00:12:16.793551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3-rootfs.mount: Deactivated successfully.
Sep 10 00:12:16.795336 containerd[1498]: time="2025-09-10T00:12:16.785512230Z" level=info msg="Container to stop \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:12:16.798197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c-shm.mount: Deactivated successfully.
Sep 10 00:12:16.800622 containerd[1498]: time="2025-09-10T00:12:16.800537215Z" level=info msg="shim disconnected" id=84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3 namespace=k8s.io
Sep 10 00:12:16.800622 containerd[1498]: time="2025-09-10T00:12:16.800618719Z" level=warning msg="cleaning up after shim disconnected" id=84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3 namespace=k8s.io
Sep 10 00:12:16.800622 containerd[1498]: time="2025-09-10T00:12:16.800630842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:16.806186 systemd[1]: cri-containerd-15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c.scope: Deactivated successfully.
Sep 10 00:12:16.817010 containerd[1498]: time="2025-09-10T00:12:16.816931347Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:12:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 10 00:12:16.821892 containerd[1498]: time="2025-09-10T00:12:16.821844255Z" level=info msg="StopContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" returns successfully" Sep 10 00:12:16.822662 containerd[1498]: time="2025-09-10T00:12:16.822614450Z" level=info msg="StopPodSandbox for \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\"" Sep 10 00:12:16.822753 containerd[1498]: time="2025-09-10T00:12:16.822665626Z" level=info msg="Container to stop \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:12:16.822753 containerd[1498]: time="2025-09-10T00:12:16.822719407Z" level=info msg="Container to stop \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:12:16.822753 containerd[1498]: time="2025-09-10T00:12:16.822731480Z" level=info msg="Container to stop \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:12:16.822753 containerd[1498]: time="2025-09-10T00:12:16.822745117Z" level=info msg="Container to stop \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:12:16.822962 containerd[1498]: time="2025-09-10T00:12:16.822757109Z" level=info msg="Container to stop \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:12:16.830986 
systemd[1]: cri-containerd-3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0.scope: Deactivated successfully. Sep 10 00:12:16.856214 containerd[1498]: time="2025-09-10T00:12:16.856139764Z" level=info msg="shim disconnected" id=15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c namespace=k8s.io Sep 10 00:12:16.856214 containerd[1498]: time="2025-09-10T00:12:16.856206670Z" level=warning msg="cleaning up after shim disconnected" id=15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c namespace=k8s.io Sep 10 00:12:16.856214 containerd[1498]: time="2025-09-10T00:12:16.856215567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:12:16.856572 containerd[1498]: time="2025-09-10T00:12:16.856146316Z" level=info msg="shim disconnected" id=3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0 namespace=k8s.io Sep 10 00:12:16.856572 containerd[1498]: time="2025-09-10T00:12:16.856384567Z" level=warning msg="cleaning up after shim disconnected" id=3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0 namespace=k8s.io Sep 10 00:12:16.856572 containerd[1498]: time="2025-09-10T00:12:16.856396299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:12:16.873920 containerd[1498]: time="2025-09-10T00:12:16.873856012Z" level=info msg="TearDown network for sandbox \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\" successfully" Sep 10 00:12:16.873920 containerd[1498]: time="2025-09-10T00:12:16.873903142Z" level=info msg="StopPodSandbox for \"15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c\" returns successfully" Sep 10 00:12:16.884602 containerd[1498]: time="2025-09-10T00:12:16.884525362Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:12:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 10 00:12:16.886731 containerd[1498]: 
time="2025-09-10T00:12:16.886548362Z" level=info msg="TearDown network for sandbox \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" successfully" Sep 10 00:12:16.886731 containerd[1498]: time="2025-09-10T00:12:16.886605500Z" level=info msg="StopPodSandbox for \"3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0\" returns successfully" Sep 10 00:12:16.925941 kubelet[2674]: I0910 00:12:16.925877 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-net\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.925941 kubelet[2674]: I0910 00:12:16.925939 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8tct\" (UniqueName: \"kubernetes.io/projected/b26d1c48-ebfa-48d1-8300-f94572dffefc-kube-api-access-z8tct\") pod \"b26d1c48-ebfa-48d1-8300-f94572dffefc\" (UID: \"b26d1c48-ebfa-48d1-8300-f94572dffefc\") " Sep 10 00:12:16.925941 kubelet[2674]: I0910 00:12:16.925962 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cni-path\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.925990 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b26d1c48-ebfa-48d1-8300-f94572dffefc-cilium-config-path\") pod \"b26d1c48-ebfa-48d1-8300-f94572dffefc\" (UID: \"b26d1c48-ebfa-48d1-8300-f94572dffefc\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.926007 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hostproc\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.926021 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-kernel\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.926626 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-clustermesh-secrets\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.926656 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-xtables-lock\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926752 kubelet[2674]: I0910 00:12:16.926683 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csrnb\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-kube-api-access-csrnb\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 00:12:16.926706 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-bpf-maps\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 
00:12:16.926729 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-etc-cni-netd\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 00:12:16.926747 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-lib-modules\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 00:12:16.926769 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-config-path\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 00:12:16.926789 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hubble-tls\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.926988 kubelet[2674]: I0910 00:12:16.926841 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-cgroup\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.927179 kubelet[2674]: I0910 00:12:16.926864 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-run\") pod \"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\" (UID: 
\"1ed673c9-f1f6-483a-9e4c-3c7b3c708d64\") " Sep 10 00:12:16.927179 kubelet[2674]: I0910 00:12:16.926071 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927179 kubelet[2674]: I0910 00:12:16.926071 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cni-path" (OuterVolumeSpecName: "cni-path") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927179 kubelet[2674]: I0910 00:12:16.926130 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hostproc" (OuterVolumeSpecName: "hostproc") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927179 kubelet[2674]: I0910 00:12:16.926156 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927352 kubelet[2674]: I0910 00:12:16.926934 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927352 kubelet[2674]: I0910 00:12:16.927309 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.927352 kubelet[2674]: I0910 00:12:16.927336 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.928405 kubelet[2674]: I0910 00:12:16.928373 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.929263 kubelet[2674]: I0910 00:12:16.929177 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.929263 kubelet[2674]: I0910 00:12:16.929236 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:12:16.932879 kubelet[2674]: I0910 00:12:16.931897 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b26d1c48-ebfa-48d1-8300-f94572dffefc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b26d1c48-ebfa-48d1-8300-f94572dffefc" (UID: "b26d1c48-ebfa-48d1-8300-f94572dffefc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:12:16.932879 kubelet[2674]: I0910 00:12:16.932021 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-kube-api-access-csrnb" (OuterVolumeSpecName: "kube-api-access-csrnb") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "kube-api-access-csrnb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:12:16.932879 kubelet[2674]: I0910 00:12:16.932092 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:12:16.933320 kubelet[2674]: I0910 00:12:16.933087 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26d1c48-ebfa-48d1-8300-f94572dffefc-kube-api-access-z8tct" (OuterVolumeSpecName: "kube-api-access-z8tct") pod "b26d1c48-ebfa-48d1-8300-f94572dffefc" (UID: "b26d1c48-ebfa-48d1-8300-f94572dffefc"). InnerVolumeSpecName "kube-api-access-z8tct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:12:16.933787 kubelet[2674]: I0910 00:12:16.933546 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:12:16.934539 kubelet[2674]: I0910 00:12:16.934495 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" (UID: "1ed673c9-f1f6-483a-9e4c-3c7b3c708d64"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.027994 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b26d1c48-ebfa-48d1-8300-f94572dffefc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028047 2674 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028057 2674 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028065 2674 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028075 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-csrnb\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-kube-api-access-csrnb\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028085 2674 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 kubelet[2674]: I0910 00:12:17.028094 2674 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028073 
kubelet[2674]: I0910 00:12:17.028103 2674 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028111 2674 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028119 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028128 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028136 2674 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028147 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028157 2674 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028165 2674 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-z8tct\" (UniqueName: \"kubernetes.io/projected/b26d1c48-ebfa-48d1-8300-f94572dffefc-kube-api-access-z8tct\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.028542 kubelet[2674]: I0910 00:12:17.028173 2674 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:12:17.692339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0-rootfs.mount: Deactivated successfully. Sep 10 00:12:17.692506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15e630d6d39e337f00ba623b1082efce28a2a71389117d259b01a8c9a3ad4c0c-rootfs.mount: Deactivated successfully. Sep 10 00:12:17.692601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bc579eb379943f011fee428353a0bb19e292241a2f64c7f7cfd68a976d9b3b0-shm.mount: Deactivated successfully. Sep 10 00:12:17.692700 systemd[1]: var-lib-kubelet-pods-1ed673c9\x2df1f6\x2d483a\x2d9e4c\x2d3c7b3c708d64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsrnb.mount: Deactivated successfully. Sep 10 00:12:17.692829 systemd[1]: var-lib-kubelet-pods-b26d1c48\x2debfa\x2d48d1\x2d8300\x2df94572dffefc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8tct.mount: Deactivated successfully. Sep 10 00:12:17.692932 systemd[1]: var-lib-kubelet-pods-1ed673c9\x2df1f6\x2d483a\x2d9e4c\x2d3c7b3c708d64-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:12:17.693047 systemd[1]: var-lib-kubelet-pods-1ed673c9\x2df1f6\x2d483a\x2d9e4c\x2d3c7b3c708d64-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 10 00:12:17.737097 kubelet[2674]: I0910 00:12:17.737053 2674 scope.go:117] "RemoveContainer" containerID="d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e" Sep 10 00:12:17.745878 systemd[1]: Removed slice kubepods-besteffort-podb26d1c48_ebfa_48d1_8300_f94572dffefc.slice - libcontainer container kubepods-besteffort-podb26d1c48_ebfa_48d1_8300_f94572dffefc.slice. Sep 10 00:12:17.750297 containerd[1498]: time="2025-09-10T00:12:17.749986671Z" level=info msg="RemoveContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\"" Sep 10 00:12:17.755702 systemd[1]: Removed slice kubepods-burstable-pod1ed673c9_f1f6_483a_9e4c_3c7b3c708d64.slice - libcontainer container kubepods-burstable-pod1ed673c9_f1f6_483a_9e4c_3c7b3c708d64.slice. Sep 10 00:12:17.756141 systemd[1]: kubepods-burstable-pod1ed673c9_f1f6_483a_9e4c_3c7b3c708d64.slice: Consumed 8.282s CPU time, 123.8M memory peak, 348K read from disk, 16.6M written to disk. Sep 10 00:12:17.757723 containerd[1498]: time="2025-09-10T00:12:17.757682673Z" level=info msg="RemoveContainer for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" returns successfully" Sep 10 00:12:17.758079 kubelet[2674]: I0910 00:12:17.758050 2674 scope.go:117] "RemoveContainer" containerID="d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e" Sep 10 00:12:17.758506 containerd[1498]: time="2025-09-10T00:12:17.758403595Z" level=error msg="ContainerStatus for \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\": not found" Sep 10 00:12:17.768683 kubelet[2674]: E0910 00:12:17.768627 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\": not found" 
containerID="d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e" Sep 10 00:12:17.768912 kubelet[2674]: I0910 00:12:17.768697 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e"} err="failed to get container status \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8e6455b16cbf7c63c81cf9f8ce441e8f74971042455a923d21a51013843d99e\": not found" Sep 10 00:12:17.768912 kubelet[2674]: I0910 00:12:17.768851 2674 scope.go:117] "RemoveContainer" containerID="84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3" Sep 10 00:12:17.771222 containerd[1498]: time="2025-09-10T00:12:17.770737225Z" level=info msg="RemoveContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\"" Sep 10 00:12:17.774636 containerd[1498]: time="2025-09-10T00:12:17.774591823Z" level=info msg="RemoveContainer for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" returns successfully" Sep 10 00:12:17.774840 kubelet[2674]: I0910 00:12:17.774798 2674 scope.go:117] "RemoveContainer" containerID="811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33" Sep 10 00:12:17.775965 containerd[1498]: time="2025-09-10T00:12:17.775909482Z" level=info msg="RemoveContainer for \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\"" Sep 10 00:12:17.779746 containerd[1498]: time="2025-09-10T00:12:17.779693246Z" level=info msg="RemoveContainer for \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\" returns successfully" Sep 10 00:12:17.780011 kubelet[2674]: I0910 00:12:17.779975 2674 scope.go:117] "RemoveContainer" containerID="237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670" Sep 10 00:12:17.781049 containerd[1498]: time="2025-09-10T00:12:17.781005084Z" level=info msg="RemoveContainer 
for \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\"" Sep 10 00:12:17.785072 containerd[1498]: time="2025-09-10T00:12:17.785009295Z" level=info msg="RemoveContainer for \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\" returns successfully" Sep 10 00:12:17.785289 kubelet[2674]: I0910 00:12:17.785236 2674 scope.go:117] "RemoveContainer" containerID="098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8" Sep 10 00:12:17.786407 containerd[1498]: time="2025-09-10T00:12:17.786363782Z" level=info msg="RemoveContainer for \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\"" Sep 10 00:12:17.789852 containerd[1498]: time="2025-09-10T00:12:17.789765957Z" level=info msg="RemoveContainer for \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\" returns successfully" Sep 10 00:12:17.789999 kubelet[2674]: I0910 00:12:17.789972 2674 scope.go:117] "RemoveContainer" containerID="35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589" Sep 10 00:12:17.791541 containerd[1498]: time="2025-09-10T00:12:17.791487588Z" level=info msg="RemoveContainer for \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\"" Sep 10 00:12:17.795099 containerd[1498]: time="2025-09-10T00:12:17.795072327Z" level=info msg="RemoveContainer for \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\" returns successfully" Sep 10 00:12:17.795264 kubelet[2674]: I0910 00:12:17.795238 2674 scope.go:117] "RemoveContainer" containerID="84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3" Sep 10 00:12:17.795490 containerd[1498]: time="2025-09-10T00:12:17.795451163Z" level=error msg="ContainerStatus for \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\": not found" Sep 10 00:12:17.795673 kubelet[2674]: E0910 
00:12:17.795633 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\": not found" containerID="84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3" Sep 10 00:12:17.795709 kubelet[2674]: I0910 00:12:17.795681 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3"} err="failed to get container status \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"84f71f80023a74a7343e6aff31d524b0fbe94c955cf62ec3b814060f7abd21c3\": not found" Sep 10 00:12:17.795741 kubelet[2674]: I0910 00:12:17.795715 2674 scope.go:117] "RemoveContainer" containerID="811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33" Sep 10 00:12:17.796002 containerd[1498]: time="2025-09-10T00:12:17.795948341Z" level=error msg="ContainerStatus for \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\": not found" Sep 10 00:12:17.796129 kubelet[2674]: E0910 00:12:17.796105 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\": not found" containerID="811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33" Sep 10 00:12:17.796178 kubelet[2674]: I0910 00:12:17.796131 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33"} err="failed to get container status 
\"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\": rpc error: code = NotFound desc = an error occurred when try to find container \"811908186b5502aa9c08b870387f7627347fb0255c76cfc1853d48a42ba2aa33\": not found"
Sep 10 00:12:17.796178 kubelet[2674]: I0910 00:12:17.796148 2674 scope.go:117] "RemoveContainer" containerID="237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670"
Sep 10 00:12:17.796353 containerd[1498]: time="2025-09-10T00:12:17.796301438Z" level=error msg="ContainerStatus for \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\": not found"
Sep 10 00:12:17.796540 kubelet[2674]: E0910 00:12:17.796444 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\": not found" containerID="237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670"
Sep 10 00:12:17.796540 kubelet[2674]: I0910 00:12:17.796484 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670"} err="failed to get container status \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\": rpc error: code = NotFound desc = an error occurred when try to find container \"237b7a6113f0e1bf3303f664c7806ceba3c9f06c3d850c925a16b47256279670\": not found"
Sep 10 00:12:17.796540 kubelet[2674]: I0910 00:12:17.796518 2674 scope.go:117] "RemoveContainer" containerID="098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8"
Sep 10 00:12:17.796758 containerd[1498]: time="2025-09-10T00:12:17.796723014Z" level=error msg="ContainerStatus for \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\": not found"
Sep 10 00:12:17.796874 kubelet[2674]: E0910 00:12:17.796853 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\": not found" containerID="098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8"
Sep 10 00:12:17.796929 kubelet[2674]: I0910 00:12:17.796875 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8"} err="failed to get container status \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"098d9c4121ec613f659aca37813262cdf617f8aba550e17fea86463ffde8b7d8\": not found"
Sep 10 00:12:17.796929 kubelet[2674]: I0910 00:12:17.796888 2674 scope.go:117] "RemoveContainer" containerID="35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589"
Sep 10 00:12:17.797081 containerd[1498]: time="2025-09-10T00:12:17.797045904Z" level=error msg="ContainerStatus for \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\": not found"
Sep 10 00:12:17.797197 kubelet[2674]: E0910 00:12:17.797174 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\": not found" containerID="35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589"
Sep 10 00:12:17.797261 kubelet[2674]: I0910 00:12:17.797201 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589"} err="failed to get container status \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\": rpc error: code = NotFound desc = an error occurred when try to find container \"35bcceec15792973ef914f35e690e9e74941acd57d89b4adab20ee2f0ec67589\": not found"
Sep 10 00:12:18.205010 kubelet[2674]: I0910 00:12:18.204942 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" path="/var/lib/kubelet/pods/1ed673c9-f1f6-483a-9e4c-3c7b3c708d64/volumes"
Sep 10 00:12:18.206031 kubelet[2674]: I0910 00:12:18.205994 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26d1c48-ebfa-48d1-8300-f94572dffefc" path="/var/lib/kubelet/pods/b26d1c48-ebfa-48d1-8300-f94572dffefc/volumes"
Sep 10 00:12:18.644059 sshd[4371]: Connection closed by 10.0.0.1 port 57520
Sep 10 00:12:18.644680 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:18.659502 systemd[1]: sshd@28-10.0.0.58:22-10.0.0.1:57520.service: Deactivated successfully.
Sep 10 00:12:18.661597 systemd[1]: session-29.scope: Deactivated successfully.
Sep 10 00:12:18.662376 systemd-logind[1484]: Session 29 logged out. Waiting for processes to exit.
Sep 10 00:12:18.671202 systemd[1]: Started sshd@29-10.0.0.58:22-10.0.0.1:57524.service - OpenSSH per-connection server daemon (10.0.0.1:57524).
Sep 10 00:12:18.675255 systemd-logind[1484]: Removed session 29.
Sep 10 00:12:18.713623 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:18.715398 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:18.720722 systemd-logind[1484]: New session 30 of user core.
Sep 10 00:12:18.731004 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 10 00:12:19.291285 kubelet[2674]: E0910 00:12:19.291243 2674 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:12:19.462413 sshd[4534]: Connection closed by 10.0.0.1 port 57524
Sep 10 00:12:19.463137 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:19.478725 kubelet[2674]: I0910 00:12:19.477034 2674 memory_manager.go:355] "RemoveStaleState removing state" podUID="1ed673c9-f1f6-483a-9e4c-3c7b3c708d64" containerName="cilium-agent"
Sep 10 00:12:19.478725 kubelet[2674]: I0910 00:12:19.477070 2674 memory_manager.go:355] "RemoveStaleState removing state" podUID="b26d1c48-ebfa-48d1-8300-f94572dffefc" containerName="cilium-operator"
Sep 10 00:12:19.482613 systemd[1]: Started sshd@30-10.0.0.58:22-10.0.0.1:57534.service - OpenSSH per-connection server daemon (10.0.0.1:57534).
Sep 10 00:12:19.483558 systemd[1]: sshd@29-10.0.0.58:22-10.0.0.1:57524.service: Deactivated successfully.
Sep 10 00:12:19.490200 systemd[1]: session-30.scope: Deactivated successfully.
Sep 10 00:12:19.497928 systemd-logind[1484]: Session 30 logged out. Waiting for processes to exit.
Sep 10 00:12:19.499482 systemd-logind[1484]: Removed session 30.
Sep 10 00:12:19.509632 systemd[1]: Created slice kubepods-burstable-pod3b6a2206_2fb3_45ae_aedf_634ca0f21186.slice - libcontainer container kubepods-burstable-pod3b6a2206_2fb3_45ae_aedf_634ca0f21186.slice.
Sep 10 00:12:19.532195 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 57534 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:19.534106 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543199 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-host-proc-sys-kernel\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543260 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-cilium-cgroup\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543276 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-bpf-maps\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543294 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-hostproc\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543308 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-lib-modules\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.544785 kubelet[2674]: I0910 00:12:19.543323 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-host-proc-sys-net\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545151 kubelet[2674]: I0910 00:12:19.543403 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b6a2206-2fb3-45ae-aedf-634ca0f21186-clustermesh-secrets\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545151 kubelet[2674]: I0910 00:12:19.543449 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzwh\" (UniqueName: \"kubernetes.io/projected/3b6a2206-2fb3-45ae-aedf-634ca0f21186-kube-api-access-fqzwh\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545151 kubelet[2674]: I0910 00:12:19.543473 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b6a2206-2fb3-45ae-aedf-634ca0f21186-cilium-ipsec-secrets\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545151 kubelet[2674]: I0910 00:12:19.543495 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-xtables-lock\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545151 kubelet[2674]: I0910 00:12:19.543509 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b6a2206-2fb3-45ae-aedf-634ca0f21186-cilium-config-path\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545265 kubelet[2674]: I0910 00:12:19.543524 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-cni-path\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545265 kubelet[2674]: I0910 00:12:19.543540 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-cilium-run\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545265 kubelet[2674]: I0910 00:12:19.543553 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b6a2206-2fb3-45ae-aedf-634ca0f21186-etc-cni-netd\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.545265 kubelet[2674]: I0910 00:12:19.543566 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b6a2206-2fb3-45ae-aedf-634ca0f21186-hubble-tls\") pod \"cilium-7zjb4\" (UID: \"3b6a2206-2fb3-45ae-aedf-634ca0f21186\") " pod="kube-system/cilium-7zjb4"
Sep 10 00:12:19.548891 systemd-logind[1484]: New session 31 of user core.
Sep 10 00:12:19.549915 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 10 00:12:19.604757 sshd[4548]: Connection closed by 10.0.0.1 port 57534
Sep 10 00:12:19.605564 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:19.617662 systemd[1]: sshd@30-10.0.0.58:22-10.0.0.1:57534.service: Deactivated successfully.
Sep 10 00:12:19.619704 systemd[1]: session-31.scope: Deactivated successfully.
Sep 10 00:12:19.621129 systemd-logind[1484]: Session 31 logged out. Waiting for processes to exit.
Sep 10 00:12:19.630169 systemd[1]: Started sshd@31-10.0.0.58:22-10.0.0.1:57550.service - OpenSSH per-connection server daemon (10.0.0.1:57550).
Sep 10 00:12:19.631205 systemd-logind[1484]: Removed session 31.
Sep 10 00:12:19.678060 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 57550 ssh2: RSA SHA256:d5FJCUuDdtOtrh+MCA7hutbqAt0MtUB4TEpxXe0/aok
Sep 10 00:12:19.679628 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:19.684123 systemd-logind[1484]: New session 32 of user core.
Sep 10 00:12:19.690921 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 10 00:12:19.813629 kubelet[2674]: E0910 00:12:19.813277 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:19.814398 containerd[1498]: time="2025-09-10T00:12:19.814354934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zjb4,Uid:3b6a2206-2fb3-45ae-aedf-634ca0f21186,Namespace:kube-system,Attempt:0,}"
Sep 10 00:12:19.839148 containerd[1498]: time="2025-09-10T00:12:19.838828196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:12:19.839148 containerd[1498]: time="2025-09-10T00:12:19.838939527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:12:19.839148 containerd[1498]: time="2025-09-10T00:12:19.838982127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:12:19.839393 containerd[1498]: time="2025-09-10T00:12:19.839173178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:12:19.863044 systemd[1]: Started cri-containerd-7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2.scope - libcontainer container 7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2.
Sep 10 00:12:19.892193 containerd[1498]: time="2025-09-10T00:12:19.892134472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zjb4,Uid:3b6a2206-2fb3-45ae-aedf-634ca0f21186,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\""
Sep 10 00:12:19.892901 kubelet[2674]: E0910 00:12:19.892871 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:19.895629 containerd[1498]: time="2025-09-10T00:12:19.895567534Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:12:19.910829 containerd[1498]: time="2025-09-10T00:12:19.910666749Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7\""
Sep 10 00:12:19.911472 containerd[1498]: time="2025-09-10T00:12:19.911445800Z" level=info msg="StartContainer for \"a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7\""
Sep 10 00:12:19.942952 systemd[1]: Started cri-containerd-a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7.scope - libcontainer container a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7.
Sep 10 00:12:19.972726 containerd[1498]: time="2025-09-10T00:12:19.972579833Z" level=info msg="StartContainer for \"a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7\" returns successfully"
Sep 10 00:12:19.983515 systemd[1]: cri-containerd-a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7.scope: Deactivated successfully.
Sep 10 00:12:20.027462 containerd[1498]: time="2025-09-10T00:12:20.027391526Z" level=info msg="shim disconnected" id=a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7 namespace=k8s.io
Sep 10 00:12:20.027462 containerd[1498]: time="2025-09-10T00:12:20.027463662Z" level=warning msg="cleaning up after shim disconnected" id=a59fcb994667343d6d418d2b6bbdcae2a6c9b4d7448549fc6662cc6778f675b7 namespace=k8s.io
Sep 10 00:12:20.027677 containerd[1498]: time="2025-09-10T00:12:20.027475324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:20.757202 kubelet[2674]: E0910 00:12:20.757165 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:20.758954 containerd[1498]: time="2025-09-10T00:12:20.758895943Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:12:20.785038 containerd[1498]: time="2025-09-10T00:12:20.784989039Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23\""
Sep 10 00:12:20.785483 containerd[1498]: time="2025-09-10T00:12:20.785462553Z" level=info msg="StartContainer for \"d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23\""
Sep 10 00:12:20.816003 systemd[1]: Started cri-containerd-d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23.scope - libcontainer container d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23.
Sep 10 00:12:20.847756 containerd[1498]: time="2025-09-10T00:12:20.847713371Z" level=info msg="StartContainer for \"d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23\" returns successfully"
Sep 10 00:12:20.853973 systemd[1]: cri-containerd-d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23.scope: Deactivated successfully.
Sep 10 00:12:20.879436 containerd[1498]: time="2025-09-10T00:12:20.879363596Z" level=info msg="shim disconnected" id=d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23 namespace=k8s.io
Sep 10 00:12:20.879646 containerd[1498]: time="2025-09-10T00:12:20.879437145Z" level=warning msg="cleaning up after shim disconnected" id=d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23 namespace=k8s.io
Sep 10 00:12:20.879646 containerd[1498]: time="2025-09-10T00:12:20.879450640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:21.650362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d66b9e23497b1850802d8de71c2a45e9bc3af3deff0012d3c2979a591b9a6d23-rootfs.mount: Deactivated successfully.
Sep 10 00:12:21.760600 kubelet[2674]: E0910 00:12:21.760565 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:21.763108 containerd[1498]: time="2025-09-10T00:12:21.763066878Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:12:21.875722 containerd[1498]: time="2025-09-10T00:12:21.875666424Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4\""
Sep 10 00:12:21.876341 containerd[1498]: time="2025-09-10T00:12:21.876314598Z" level=info msg="StartContainer for \"1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4\""
Sep 10 00:12:21.910341 systemd[1]: run-containerd-runc-k8s.io-1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4-runc.SqNN9i.mount: Deactivated successfully.
Sep 10 00:12:21.924178 systemd[1]: Started cri-containerd-1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4.scope - libcontainer container 1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4.
Sep 10 00:12:21.956607 containerd[1498]: time="2025-09-10T00:12:21.956555279Z" level=info msg="StartContainer for \"1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4\" returns successfully"
Sep 10 00:12:21.960206 systemd[1]: cri-containerd-1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4.scope: Deactivated successfully.
Sep 10 00:12:21.993935 containerd[1498]: time="2025-09-10T00:12:21.993848258Z" level=info msg="shim disconnected" id=1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4 namespace=k8s.io
Sep 10 00:12:21.993935 containerd[1498]: time="2025-09-10T00:12:21.993926466Z" level=warning msg="cleaning up after shim disconnected" id=1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4 namespace=k8s.io
Sep 10 00:12:21.993935 containerd[1498]: time="2025-09-10T00:12:21.993935463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:22.202233 kubelet[2674]: E0910 00:12:22.202083 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:22.649858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1112dd397d02736ee1b30b269ac73904d8db0613bc8f0caeb3f0ce32269a28c4-rootfs.mount: Deactivated successfully.
Sep 10 00:12:22.763728 kubelet[2674]: E0910 00:12:22.763702 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:22.765320 containerd[1498]: time="2025-09-10T00:12:22.765107609Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:12:22.781959 containerd[1498]: time="2025-09-10T00:12:22.781919287Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf\""
Sep 10 00:12:22.783634 containerd[1498]: time="2025-09-10T00:12:22.782777807Z" level=info msg="StartContainer for \"37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf\""
Sep 10 00:12:22.810955 systemd[1]: Started cri-containerd-37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf.scope - libcontainer container 37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf.
Sep 10 00:12:22.836003 systemd[1]: cri-containerd-37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf.scope: Deactivated successfully.
Sep 10 00:12:22.837827 containerd[1498]: time="2025-09-10T00:12:22.837762939Z" level=info msg="StartContainer for \"37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf\" returns successfully"
Sep 10 00:12:22.861023 containerd[1498]: time="2025-09-10T00:12:22.860964987Z" level=info msg="shim disconnected" id=37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf namespace=k8s.io
Sep 10 00:12:22.861023 containerd[1498]: time="2025-09-10T00:12:22.861017956Z" level=warning msg="cleaning up after shim disconnected" id=37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf namespace=k8s.io
Sep 10 00:12:22.861023 containerd[1498]: time="2025-09-10T00:12:22.861026322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:12:23.650020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37ee762e12f365ef78afd15ae960a900aa2d035911aa296946ce05da4b990adf-rootfs.mount: Deactivated successfully.
Sep 10 00:12:23.767772 kubelet[2674]: E0910 00:12:23.767744 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:23.769387 containerd[1498]: time="2025-09-10T00:12:23.769349037Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:12:23.796168 containerd[1498]: time="2025-09-10T00:12:23.794042376Z" level=info msg="CreateContainer within sandbox \"7f6bfe1480ee91e6b2b6710470e3d2be24f17f81d2762960628a11cd521c49c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"832c33dc5d128bd7db8023e4c16a2b774f0298c76dbab0cf63fe2592eb165d80\""
Sep 10 00:12:23.797892 containerd[1498]: time="2025-09-10T00:12:23.797842467Z" level=info msg="StartContainer for \"832c33dc5d128bd7db8023e4c16a2b774f0298c76dbab0cf63fe2592eb165d80\""
Sep 10 00:12:23.831013 systemd[1]: Started cri-containerd-832c33dc5d128bd7db8023e4c16a2b774f0298c76dbab0cf63fe2592eb165d80.scope - libcontainer container 832c33dc5d128bd7db8023e4c16a2b774f0298c76dbab0cf63fe2592eb165d80.
Sep 10 00:12:23.868111 containerd[1498]: time="2025-09-10T00:12:23.868049031Z" level=info msg="StartContainer for \"832c33dc5d128bd7db8023e4c16a2b774f0298c76dbab0cf63fe2592eb165d80\" returns successfully"
Sep 10 00:12:24.299851 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 10 00:12:24.772545 kubelet[2674]: E0910 00:12:24.772400 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:24.786034 kubelet[2674]: I0910 00:12:24.785978 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7zjb4" podStartSLOduration=5.785956831 podStartE2EDuration="5.785956831s" podCreationTimestamp="2025-09-10 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:12:24.784768589 +0000 UTC m=+100.674413702" watchObservedRunningTime="2025-09-10 00:12:24.785956831 +0000 UTC m=+100.675601944"
Sep 10 00:12:25.201697 kubelet[2674]: E0910 00:12:25.201658 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:25.814618 kubelet[2674]: E0910 00:12:25.814576 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:27.465156 systemd-networkd[1414]: lxc_health: Link UP
Sep 10 00:12:27.469142 systemd-networkd[1414]: lxc_health: Gained carrier
Sep 10 00:12:27.815432 kubelet[2674]: E0910 00:12:27.814866 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:28.780714 kubelet[2674]: E0910 00:12:28.780672 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:28.819100 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Sep 10 00:12:29.782593 kubelet[2674]: E0910 00:12:29.782557 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:12:32.506193 sshd[4561]: Connection closed by 10.0.0.1 port 57550
Sep 10 00:12:32.506775 sshd-session[4554]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:32.511138 systemd[1]: sshd@31-10.0.0.58:22-10.0.0.1:57550.service: Deactivated successfully.
Sep 10 00:12:32.513126 systemd[1]: session-32.scope: Deactivated successfully.
Sep 10 00:12:32.513825 systemd-logind[1484]: Session 32 logged out. Waiting for processes to exit.
Sep 10 00:12:32.514662 systemd-logind[1484]: Removed session 32.