Sep 4 23:40:48.030422 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:40:48.030457 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:40:48.030470 kernel: BIOS-provided physical RAM map:
Sep 4 23:40:48.030480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 23:40:48.030492 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 23:40:48.030502 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 23:40:48.030514 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 23:40:48.030523 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 4 23:40:48.030533 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 23:40:48.030542 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 23:40:48.030552 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 23:40:48.030562 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 23:40:48.030577 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 23:40:48.030591 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 23:40:48.030606 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 23:40:48.030617 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 23:40:48.030627 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:40:48.030637 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:40:48.030651 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:40:48.030661 kernel: NX (Execute Disable) protection: active
Sep 4 23:40:48.030671 kernel: APIC: Static calls initialized
Sep 4 23:40:48.030681 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 4 23:40:48.030692 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 4 23:40:48.030702 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 4 23:40:48.030712 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 4 23:40:48.030722 kernel: extended physical RAM map:
Sep 4 23:40:48.030732 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 23:40:48.030742 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 23:40:48.030753 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 23:40:48.030763 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 23:40:48.030777 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a148017] usable
Sep 4 23:40:48.030788 kernel: reserve setup_data: [mem 0x000000009a148018-0x000000009a184e57] usable
Sep 4 23:40:48.030798 kernel: reserve setup_data: [mem 0x000000009a184e58-0x000000009a185017] usable
Sep 4 23:40:48.030808 kernel: reserve setup_data: [mem 0x000000009a185018-0x000000009a18ec57] usable
Sep 4 23:40:48.030818 kernel: reserve setup_data: [mem 0x000000009a18ec58-0x000000009b8ecfff] usable
Sep 4 23:40:48.030829 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 23:40:48.030839 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 23:40:48.030849 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 23:40:48.030860 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 23:40:48.030870 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 23:40:48.030889 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 23:40:48.030900 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 23:40:48.030910 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 23:40:48.030921 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:40:48.030932 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:40:48.030947 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:40:48.030961 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:40:48.030972 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
Sep 4 23:40:48.030983 kernel: random: crng init done
Sep 4 23:40:48.030994 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 4 23:40:48.031005 kernel: secureboot: Secure boot enabled
Sep 4 23:40:48.031015 kernel: SMBIOS 2.8 present.
Sep 4 23:40:48.031026 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 23:40:48.031037 kernel: Hypervisor detected: KVM
Sep 4 23:40:48.031047 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:40:48.031058 kernel: kvm-clock: using sched offset of 6953695292 cycles
Sep 4 23:40:48.031070 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:40:48.031085 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 23:40:48.031100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:40:48.031111 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:40:48.031122 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 4 23:40:48.031133 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 23:40:48.031144 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:40:48.031155 kernel: Using GB pages for direct mapping
Sep 4 23:40:48.031176 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:40:48.031187 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 4 23:40:48.031203 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 23:40:48.031215 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031229 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031240 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 4 23:40:48.031251 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031260 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031269 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031294 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:40:48.031308 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 23:40:48.031317 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 4 23:40:48.031326 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 4 23:40:48.031335 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 4 23:40:48.031344 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 4 23:40:48.031353 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 4 23:40:48.031363 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 4 23:40:48.031372 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 4 23:40:48.031381 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 4 23:40:48.031394 kernel: No NUMA configuration found
Sep 4 23:40:48.031404 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 4 23:40:48.031414 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
Sep 4 23:40:48.031425 kernel: Zone ranges:
Sep 4 23:40:48.031435 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:40:48.031445 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 4 23:40:48.031456 kernel: Normal empty
Sep 4 23:40:48.031467 kernel: Movable zone start for each node
Sep 4 23:40:48.031477 kernel: Early memory node ranges
Sep 4 23:40:48.031488 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 4 23:40:48.031504 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 4 23:40:48.031514 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 4 23:40:48.031525 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 4 23:40:48.031536 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 4 23:40:48.031547 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 4 23:40:48.031558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:40:48.031569 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 4 23:40:48.031580 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 23:40:48.031591 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 23:40:48.031606 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 23:40:48.031617 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 4 23:40:48.031628 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 23:40:48.031643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:40:48.031654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:40:48.031665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 23:40:48.031676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:40:48.031687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:40:48.031699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:40:48.031713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:40:48.031724 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:40:48.031735 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:40:48.031746 kernel: TSC deadline timer available
Sep 4 23:40:48.031757 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 23:40:48.031768 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:40:48.031779 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 23:40:48.031805 kernel: kvm-guest: setup PV sched yield
Sep 4 23:40:48.031816 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 23:40:48.031828 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:40:48.031839 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:40:48.031851 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 23:40:48.031866 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 4 23:40:48.031877 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 4 23:40:48.031888 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 23:40:48.031944 kernel: kvm-guest: PV spinlocks enabled
Sep 4 23:40:48.031960 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 23:40:48.031973 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:40:48.031985 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:40:48.031997 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:40:48.032013 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:40:48.032024 kernel: Fallback order for Node 0: 0
Sep 4 23:40:48.032036 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
Sep 4 23:40:48.032047 kernel: Policy zone: DMA32
Sep 4 23:40:48.032059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:40:48.032074 kernel: Memory: 2370352K/2552216K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 181608K reserved, 0K cma-reserved)
Sep 4 23:40:48.032086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 23:40:48.032097 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:40:48.032108 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:40:48.032120 kernel: Dynamic Preempt: voluntary
Sep 4 23:40:48.032131 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:40:48.032144 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:40:48.032156 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 23:40:48.032177 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:40:48.032193 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:40:48.032205 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:40:48.032216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:40:48.032228 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 23:40:48.032239 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 23:40:48.032251 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:40:48.032263 kernel: Console: colour dummy device 80x25
Sep 4 23:40:48.032292 kernel: printk: console [ttyS0] enabled
Sep 4 23:40:48.032304 kernel: ACPI: Core revision 20230628
Sep 4 23:40:48.032322 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 23:40:48.032336 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:40:48.032349 kernel: x2apic enabled
Sep 4 23:40:48.032361 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:40:48.032373 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 23:40:48.032385 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 23:40:48.032397 kernel: kvm-guest: setup PV IPIs
Sep 4 23:40:48.032408 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 23:40:48.032420 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 23:40:48.032436 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 23:40:48.032447 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 23:40:48.032459 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 23:40:48.032471 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 23:40:48.032482 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:40:48.032494 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:40:48.032506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:40:48.032518 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 23:40:48.032529 kernel: active return thunk: retbleed_return_thunk
Sep 4 23:40:48.032544 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 23:40:48.032577 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:40:48.032589 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:40:48.032601 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 23:40:48.032614 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 23:40:48.032625 kernel: active return thunk: srso_return_thunk
Sep 4 23:40:48.032637 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 23:40:48.032652 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:40:48.032669 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:40:48.032681 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:40:48.032692 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:40:48.032704 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 23:40:48.032716 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:40:48.032727 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:40:48.032739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:40:48.032751 kernel: landlock: Up and running.
Sep 4 23:40:48.032762 kernel: SELinux: Initializing.
Sep 4 23:40:48.032777 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:40:48.032789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:40:48.032801 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 23:40:48.032812 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:40:48.032824 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:40:48.032835 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:40:48.032847 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 23:40:48.032858 kernel: ... version: 0
Sep 4 23:40:48.032870 kernel: ... bit width: 48
Sep 4 23:40:48.032889 kernel: ... generic registers: 6
Sep 4 23:40:48.032901 kernel: ... value mask: 0000ffffffffffff
Sep 4 23:40:48.032912 kernel: ... max period: 00007fffffffffff
Sep 4 23:40:48.032924 kernel: ... fixed-purpose events: 0
Sep 4 23:40:48.032935 kernel: ... event mask: 000000000000003f
Sep 4 23:40:48.032947 kernel: signal: max sigframe size: 1776
Sep 4 23:40:48.032959 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:40:48.032971 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:40:48.032982 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:40:48.032997 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:40:48.033008 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 23:40:48.033020 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 23:40:48.033031 kernel: smpboot: Max logical packages: 1
Sep 4 23:40:48.033043 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 23:40:48.033054 kernel: devtmpfs: initialized
Sep 4 23:40:48.033079 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:40:48.033102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 4 23:40:48.033132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 4 23:40:48.033171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:40:48.033192 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 23:40:48.033204 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:40:48.033216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:40:48.033227 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:40:48.033239 kernel: audit: type=2000 audit(1757029247.513:1): state=initialized audit_enabled=0 res=1
Sep 4 23:40:48.033251 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:40:48.033263 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:40:48.033315 kernel: cpuidle: using governor menu
Sep 4 23:40:48.033355 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:40:48.033368 kernel: dca service started, version 1.12.1
Sep 4 23:40:48.033380 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 4 23:40:48.033392 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:40:48.033403 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:40:48.033414 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:40:48.033426 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:40:48.033438 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:40:48.033450 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:40:48.033466 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:40:48.033478 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:40:48.033489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:40:48.033501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:40:48.033513 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:40:48.033524 kernel: ACPI: Interpreter enabled
Sep 4 23:40:48.033535 kernel: ACPI: PM: (supports S0 S5)
Sep 4 23:40:48.033546 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:40:48.033555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:40:48.033568 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:40:48.033577 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 23:40:48.033587 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:40:48.033942 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:40:48.034131 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 23:40:48.034346 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 23:40:48.034367 kernel: PCI host bridge to bus 0000:00
Sep 4 23:40:48.034637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:40:48.034804 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:40:48.034962 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:40:48.035124 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 23:40:48.035318 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 23:40:48.035486 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:40:48.035655 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:40:48.035901 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 4 23:40:48.036175 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 4 23:40:48.036392 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 4 23:40:48.036659 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 4 23:40:48.036832 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 4 23:40:48.037002 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 4 23:40:48.037193 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:40:48.037424 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 23:40:48.037600 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 4 23:40:48.037772 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 4 23:40:48.037985 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 23:40:48.038419 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 4 23:40:48.038633 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 4 23:40:48.038829 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 4 23:40:48.039025 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 23:40:48.039247 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:40:48.039478 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 4 23:40:48.039652 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 4 23:40:48.039822 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 23:40:48.040013 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 4 23:40:48.040243 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 4 23:40:48.040475 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 23:40:48.040680 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 4 23:40:48.040919 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 4 23:40:48.041099 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 4 23:40:48.041333 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 4 23:40:48.041521 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 4 23:40:48.041547 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:40:48.041560 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:40:48.041572 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:40:48.041584 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:40:48.041597 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 23:40:48.041608 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 23:40:48.041620 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 23:40:48.041632 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 23:40:48.041649 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 23:40:48.041661 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 23:40:48.041673 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 23:40:48.041685 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 23:40:48.041697 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 23:40:48.041709 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 23:40:48.041721 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 23:40:48.041734 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 23:40:48.041745 kernel: iommu: Default domain type: Translated
Sep 4 23:40:48.041762 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:40:48.041774 kernel: efivars: Registered efivars operations
Sep 4 23:40:48.041786 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:40:48.041798 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:40:48.041810 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 4 23:40:48.041822 kernel: e820: reserve RAM buffer [mem 0x9a148018-0x9bffffff]
Sep 4 23:40:48.041833 kernel: e820: reserve RAM buffer [mem 0x9a185018-0x9bffffff]
Sep 4 23:40:48.041845 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 4 23:40:48.041857 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 4 23:40:48.042044 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 23:40:48.042239 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 23:40:48.042441 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:40:48.042459 kernel: vgaarb: loaded
Sep 4 23:40:48.042472 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 23:40:48.042484 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 23:40:48.042497 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:40:48.042509 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:40:48.042522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:40:48.042541 kernel: pnp: PnP ACPI init
Sep 4 23:40:48.043037 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 23:40:48.043060 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 23:40:48.043073 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:40:48.043085 kernel: NET: Registered PF_INET protocol family
Sep 4 23:40:48.043097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:40:48.043109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:40:48.043122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:40:48.043139 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:40:48.043151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:40:48.043177 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:40:48.043189 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:40:48.043201 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:40:48.043213 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:40:48.043225 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:40:48.043428 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 4 23:40:48.043616 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 4 23:40:48.043784 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:40:48.043942 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:40:48.044098 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:40:48.044266 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 23:40:48.044468 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 23:40:48.044632 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:40:48.044650 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:40:48.044669 kernel: Initialise system trusted keyrings
Sep 4 23:40:48.044681 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:40:48.044693 kernel: Key type asymmetric registered
Sep 4 23:40:48.044705 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:40:48.044717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:40:48.044729 kernel: io scheduler mq-deadline registered
Sep 4 23:40:48.044776 kernel: io scheduler kyber registered
Sep 4 23:40:48.044788 kernel: io scheduler bfq registered
Sep 4 23:40:48.044800 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:40:48.044840 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 23:40:48.044856 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 23:40:48.044868 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 23:40:48.044880 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:40:48.044893 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:40:48.044905 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:40:48.044917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:40:48.044930 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:40:48.045171 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 23:40:48.045198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:40:48.045385 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 23:40:48.045547 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T23:40:47 UTC (1757029247)
Sep 4 23:40:48.045707 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 23:40:48.045723 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 23:40:48.045736 kernel: efifb: probing for efifb
Sep 4 23:40:48.045748 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 23:40:48.045761 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 23:40:48.045778 kernel: efifb: scrolling: redraw
Sep 4 23:40:48.045791 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:40:48.045803 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 23:40:48.045815 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:40:48.045828 kernel: pstore: Using crash dump compression: deflate
Sep 4 23:40:48.045840 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 23:40:48.045852 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:40:48.045864 kernel: Segment Routing with IPv6
Sep 4 23:40:48.045876 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:40:48.045892 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:40:48.045905 kernel: Key type dns_resolver registered
Sep 4 23:40:48.045921 kernel: IPI shorthand broadcast: enabled
Sep 4 23:40:48.045933 kernel: sched_clock: Marking stable (1170003552, 148635281)->(1403853485, -85214652)
Sep 4 23:40:48.045946 kernel: registered taskstats version 1
Sep 4 23:40:48.045958 kernel: Loading compiled-in X.509 certificates
Sep 4 23:40:48.045974 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:40:48.045986 kernel: Key type .fscrypt registered
Sep 4 23:40:48.045998 kernel: Key type fscrypt-provisioning registered
Sep 4 23:40:48.046010 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:40:48.046023 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:40:48.046035 kernel: ima: No architecture policies found
Sep 4 23:40:48.046047 kernel: clk: Disabling unused clocks
Sep 4 23:40:48.046059 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:40:48.046071 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:40:48.046087 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:40:48.046099 kernel: Run /init as init process
Sep 4 23:40:48.046111 kernel: with arguments:
Sep 4 23:40:48.046124 kernel: /init
Sep 4 23:40:48.046136 kernel: with environment:
Sep 4 23:40:48.046148 kernel: HOME=/
Sep 4 23:40:48.046170 kernel: TERM=linux
Sep 4 23:40:48.046182 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:40:48.046196 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:40:48.046217 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:40:48.046231 systemd[1]: Detected virtualization kvm. Sep 4 23:40:48.046244 systemd[1]: Detected architecture x86-64. Sep 4 23:40:48.046256 systemd[1]: Running in initrd. Sep 4 23:40:48.046269 systemd[1]: No hostname configured, using default hostname. Sep 4 23:40:48.046299 systemd[1]: Hostname set to . Sep 4 23:40:48.046317 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:40:48.046333 systemd[1]: Queued start job for default target initrd.target. Sep 4 23:40:48.046347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:40:48.046363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:40:48.046377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 23:40:48.046390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:40:48.046404 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 23:40:48.046418 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 23:40:48.046438 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 23:40:48.046451 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 23:40:48.046464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 23:40:48.046477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:40:48.046490 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:40:48.046503 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:40:48.046516 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:40:48.046529 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:40:48.046546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:40:48.046559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:40:48.046572 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 23:40:48.046585 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 23:40:48.046598 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:40:48.046646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:40:48.046659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:40:48.046672 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:40:48.046685 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 23:40:48.046702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:40:48.046715 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 23:40:48.046728 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 23:40:48.046742 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:40:48.046755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:40:48.046769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:48.046782 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 4 23:40:48.046795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:40:48.046813 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 23:40:48.046868 systemd-journald[191]: Collecting audit messages is disabled. Sep 4 23:40:48.046906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:40:48.046920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:48.046933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:40:48.046947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:40:48.046960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:40:48.046974 systemd-journald[191]: Journal started Sep 4 23:40:48.047006 systemd-journald[191]: Runtime Journal (/run/log/journal/f8ee0f62dcec4779ae521a40d91074eb) is 6M, max 48M, 42M free. Sep 4 23:40:48.038630 systemd-modules-load[194]: Inserted module 'overlay' Sep 4 23:40:48.050356 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:40:48.063693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:40:48.064363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:40:48.067220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:40:48.071595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:40:48.078700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:40:48.082063 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 4 23:40:48.084837 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 4 23:40:48.085942 kernel: Bridge firewalling registered Sep 4 23:40:48.087589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:40:48.173071 dracut-cmdline[222]: dracut-dracut-053 Sep 4 23:40:48.173586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:40:48.177121 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:40:48.193673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:40:48.208589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:40:48.248262 systemd-resolved[255]: Positive Trust Anchors: Sep 4 23:40:48.248304 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:40:48.248336 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:40:48.251532 systemd-resolved[255]: Defaulting to hostname 'linux'. Sep 4 23:40:48.253028 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 4 23:40:48.259915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:40:48.299332 kernel: SCSI subsystem initialized Sep 4 23:40:48.310315 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:40:48.322322 kernel: iscsi: registered transport (tcp) Sep 4 23:40:48.349330 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:40:48.349439 kernel: QLogic iSCSI HBA Driver Sep 4 23:40:48.424563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 23:40:48.433609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:40:48.462368 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 23:40:48.462448 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:40:48.463460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:40:48.509948 kernel: raid6: avx2x4 gen() 22308 MB/s Sep 4 23:40:48.526332 kernel: raid6: avx2x2 gen() 24587 MB/s Sep 4 23:40:48.543647 kernel: raid6: avx2x1 gen() 19148 MB/s Sep 4 23:40:48.543731 kernel: raid6: using algorithm avx2x2 gen() 24587 MB/s Sep 4 23:40:48.561930 kernel: raid6: .... xor() 14737 MB/s, rmw enabled Sep 4 23:40:48.562039 kernel: raid6: using avx2x2 recovery algorithm Sep 4 23:40:48.584328 kernel: xor: automatically using best checksumming function avx Sep 4 23:40:48.752307 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:40:48.769473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:40:48.781674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:40:48.800035 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 4 23:40:48.806063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:40:48.817543 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 4 23:40:48.837932 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Sep 4 23:40:48.879340 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:40:48.890543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:40:48.973755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:40:48.983477 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 23:40:48.997271 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:40:48.998860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:40:49.000821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:40:49.004478 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:40:49.012444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:40:49.019355 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 23:40:49.023307 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 23:40:49.030563 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:40:49.030601 kernel: GPT:9289727 != 19775487 Sep 4 23:40:49.030613 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:40:49.030624 kernel: GPT:9289727 != 19775487 Sep 4 23:40:49.030635 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:40:49.030647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:40:49.029408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:40:49.045321 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 23:40:49.061482 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 23:40:49.061543 kernel: libata version 3.00 loaded. 
Sep 4 23:40:49.061555 kernel: AES CTR mode by8 optimization enabled Sep 4 23:40:49.071567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:40:49.071751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:40:49.074192 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:40:49.074266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:40:49.087379 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464) Sep 4 23:40:49.074469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:49.085397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:49.091365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (474) Sep 4 23:40:49.093544 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 23:40:49.096493 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 23:40:49.099831 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 4 23:40:49.100087 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 23:40:49.099722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:49.102245 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:40:49.112330 kernel: scsi host0: ahci Sep 4 23:40:49.112585 kernel: scsi host1: ahci Sep 4 23:40:49.112796 kernel: scsi host2: ahci Sep 4 23:40:49.112993 kernel: scsi host3: ahci Sep 4 23:40:49.114326 kernel: scsi host4: ahci Sep 4 23:40:49.116288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 4 23:40:49.122972 kernel: scsi host5: ahci Sep 4 23:40:49.123255 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 4 23:40:49.123269 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 4 23:40:49.124931 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 4 23:40:49.124960 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 4 23:40:49.124976 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 4 23:40:49.124990 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 4 23:40:49.160894 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 23:40:49.162445 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 23:40:49.175481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 23:40:49.186181 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:40:49.200617 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:40:49.203093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:40:49.203230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:49.206762 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:49.210533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:49.213088 disk-uuid[566]: Primary Header is updated. Sep 4 23:40:49.213088 disk-uuid[566]: Secondary Entries is updated. Sep 4 23:40:49.213088 disk-uuid[566]: Secondary Header is updated. Sep 4 23:40:49.213509 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 23:40:49.219320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:40:49.224346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:40:49.254009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:49.299258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:40:49.327835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:40:49.436891 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 23:40:49.436994 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 23:40:49.437013 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 23:40:49.438302 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 23:40:49.439323 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 23:40:49.440306 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 23:40:49.440330 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 23:40:49.440810 kernel: ata3.00: applying bridge limits Sep 4 23:40:49.442333 kernel: ata3.00: configured for UDMA/100 Sep 4 23:40:49.442427 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 23:40:49.484333 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 23:40:49.484735 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 23:40:49.498448 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 23:40:50.226324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:40:50.226919 disk-uuid[568]: The operation has completed successfully. Sep 4 23:40:50.260512 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:40:50.260669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:40:50.325661 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 4 23:40:50.329550 sh[597]: Success Sep 4 23:40:50.345321 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 23:40:50.394258 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:40:50.404160 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 23:40:50.407567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:40:50.420183 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d Sep 4 23:40:50.420247 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:40:50.420291 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:40:50.421447 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:40:50.422938 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:40:50.428800 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:40:50.430572 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:40:50.446589 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:40:50.449795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:40:50.465581 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:40:50.465631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:40:50.465644 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:40:50.469307 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:40:50.474330 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:40:50.481587 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 4 23:40:50.487621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 23:40:50.612723 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:40:50.624626 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:40:50.628908 ignition[680]: Ignition 2.20.0 Sep 4 23:40:50.629233 ignition[680]: Stage: fetch-offline Sep 4 23:40:50.629301 ignition[680]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:50.629316 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:50.629449 ignition[680]: parsed url from cmdline: "" Sep 4 23:40:50.629455 ignition[680]: no config URL provided Sep 4 23:40:50.629465 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:40:50.629480 ignition[680]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:40:50.629516 ignition[680]: op(1): [started] loading QEMU firmware config module Sep 4 23:40:50.629523 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 23:40:50.641444 ignition[680]: op(1): [finished] loading QEMU firmware config module Sep 4 23:40:50.666422 systemd-networkd[782]: lo: Link UP Sep 4 23:40:50.666433 systemd-networkd[782]: lo: Gained carrier Sep 4 23:40:50.669733 systemd-networkd[782]: Enumeration completed Sep 4 23:40:50.669924 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:40:50.672000 systemd[1]: Reached target network.target - Network. Sep 4 23:40:50.675233 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:40:50.675242 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 23:40:50.680080 systemd-networkd[782]: eth0: Link UP Sep 4 23:40:50.680101 systemd-networkd[782]: eth0: Gained carrier Sep 4 23:40:50.680110 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:40:50.693875 ignition[680]: parsing config with SHA512: beec8f754d3ab000f7c05336ace38c7ca0c0211c7bb0bd8a857ecd073a18c88307e303bfd5919d1b4b2b3738410db4066903f498df8015d0b3e26374cbf63adf Sep 4 23:40:50.701119 unknown[680]: fetched base config from "system" Sep 4 23:40:50.701135 unknown[680]: fetched user config from "qemu" Sep 4 23:40:50.701574 ignition[680]: fetch-offline: fetch-offline passed Sep 4 23:40:50.701659 ignition[680]: Ignition finished successfully Sep 4 23:40:50.704131 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:40:50.708391 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:40:50.710898 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 23:40:50.715478 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 23:40:50.774259 ignition[789]: Ignition 2.20.0 Sep 4 23:40:50.774290 ignition[789]: Stage: kargs Sep 4 23:40:50.774511 ignition[789]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:50.774526 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:50.775816 ignition[789]: kargs: kargs passed Sep 4 23:40:50.775868 ignition[789]: Ignition finished successfully Sep 4 23:40:50.783324 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:40:50.795544 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 23:40:50.814144 ignition[797]: Ignition 2.20.0 Sep 4 23:40:50.814159 ignition[797]: Stage: disks Sep 4 23:40:50.814416 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:50.814432 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:50.818270 ignition[797]: disks: disks passed Sep 4 23:40:50.818340 ignition[797]: Ignition finished successfully Sep 4 23:40:50.821951 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:40:50.822332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:40:50.825442 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 23:40:50.829115 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:40:50.829546 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:40:50.829903 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:40:50.848596 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 23:40:50.866133 systemd-resolved[255]: Detected conflict on linux IN A 10.0.0.28 Sep 4 23:40:50.866153 systemd-resolved[255]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Sep 4 23:40:50.867644 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 23:40:50.875909 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:40:50.889450 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 23:40:50.998319 kernel: EXT4-fs (vda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none. Sep 4 23:40:50.999871 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:40:51.002732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:40:51.020450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 23:40:51.023965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:40:51.027214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 23:40:51.027300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 23:40:51.037470 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (816) Sep 4 23:40:51.037507 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:40:51.037523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:40:51.037538 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:40:51.029610 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:40:51.040162 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:40:51.042429 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:40:51.043865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 23:40:51.055511 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 23:40:51.094652 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:40:51.100037 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:40:51.106576 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:40:51.111633 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:40:51.217871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:40:51.257690 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:40:51.261383 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 4 23:40:51.267321 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:40:51.333321 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:40:51.420324 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:40:51.432472 ignition[933]: INFO : Ignition 2.20.0 Sep 4 23:40:51.432472 ignition[933]: INFO : Stage: mount Sep 4 23:40:51.434629 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:51.434629 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:51.437921 ignition[933]: INFO : mount: mount passed Sep 4 23:40:51.438923 ignition[933]: INFO : Ignition finished successfully Sep 4 23:40:51.443096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:40:51.454693 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:40:51.467507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:40:51.484578 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (942) Sep 4 23:40:51.487179 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:40:51.487224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:40:51.487238 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:40:51.491324 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:40:51.493325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:40:51.544762 ignition[959]: INFO : Ignition 2.20.0 Sep 4 23:40:51.544762 ignition[959]: INFO : Stage: files Sep 4 23:40:51.547481 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:51.547481 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:51.551613 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Sep 4 23:40:51.554168 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 23:40:51.554168 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 23:40:51.559354 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 23:40:51.561109 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 23:40:51.563446 unknown[959]: wrote ssh authorized keys file for user: core Sep 4 23:40:51.574019 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 23:40:51.577316 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 4 23:40:51.579704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 4 23:40:51.665430 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 23:40:52.453480 systemd-networkd[782]: eth0: Gained IPv6LL Sep 4 23:40:52.471691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 4 23:40:52.471691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:40:52.476400 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 23:40:52.559619 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 23:40:52.713326 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:40:52.713326 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 23:40:52.719368 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 4 23:40:53.101765 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 23:40:53.879841 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 23:40:53.879841 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 
23:40:53.884483 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 23:40:53.884483 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 23:40:53.909160 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 23:40:53.914162 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 23:40:53.916315 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 23:40:53.916315 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 23:40:53.919445 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 23:40:53.921256 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:40:53.923414 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:40:53.925320 ignition[959]: INFO : files: files passed Sep 4 23:40:53.926214 ignition[959]: INFO : Ignition finished successfully Sep 4 23:40:53.930354 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 23:40:53.938448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 23:40:53.939310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 23:40:53.949102 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 23:40:53.949446 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 4 23:40:53.953050 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 23:40:53.954668 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:40:53.954668 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:40:53.960079 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:40:53.964112 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:40:53.964504 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 23:40:53.975435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 23:40:54.008116 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 23:40:54.008258 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 23:40:54.012219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 23:40:54.012324 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 23:40:54.016518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 23:40:54.019558 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 23:40:54.045110 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:40:54.057628 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 23:40:54.067612 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:40:54.069312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:40:54.072033 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 4 23:40:54.074587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 23:40:54.074724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:40:54.077544 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 23:40:54.079402 systemd[1]: Stopped target basic.target - Basic System. Sep 4 23:40:54.081817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 23:40:54.084202 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:40:54.086262 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 23:40:54.088486 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 23:40:54.090885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:40:54.093765 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 23:40:54.096065 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 23:40:54.098502 systemd[1]: Stopped target swap.target - Swaps. Sep 4 23:40:54.100645 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 23:40:54.100817 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:40:54.103365 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:40:54.104791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:40:54.106899 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 23:40:54.107029 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:40:54.109129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 23:40:54.109246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 23:40:54.111744 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 4 23:40:54.111860 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:40:54.114227 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:40:54.116170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:40:54.119420 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:40:54.121487 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 23:40:54.123540 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 23:40:54.125820 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 23:40:54.125971 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:40:54.127833 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 23:40:54.127925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:40:54.130143 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 23:40:54.130289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:40:54.133058 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 23:40:54.133175 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 23:40:54.143442 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 23:40:54.145956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 23:40:54.147228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 23:40:54.147376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:40:54.149966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 23:40:54.150154 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:40:54.156926 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 4 23:40:54.157060 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 23:40:54.164211 ignition[1016]: INFO : Ignition 2.20.0 Sep 4 23:40:54.164211 ignition[1016]: INFO : Stage: umount Sep 4 23:40:54.165791 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:40:54.165791 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:40:54.165791 ignition[1016]: INFO : umount: umount passed Sep 4 23:40:54.165791 ignition[1016]: INFO : Ignition finished successfully Sep 4 23:40:54.168202 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 23:40:54.168383 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 23:40:54.169396 systemd[1]: Stopped target network.target - Network. Sep 4 23:40:54.172608 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 23:40:54.172674 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 23:40:54.174530 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 23:40:54.174584 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 23:40:54.175429 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 23:40:54.175480 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 23:40:54.175743 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 23:40:54.175788 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 23:40:54.176178 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 23:40:54.176774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 23:40:54.184598 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 23:40:54.184785 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 23:40:54.192040 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 4 23:40:54.192867 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 23:40:54.193059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 23:40:54.198836 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 23:40:54.199845 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 23:40:54.199941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:40:54.212379 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 23:40:54.212469 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 23:40:54.212540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:40:54.214460 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:40:54.214525 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:40:54.219033 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 23:40:54.219109 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 23:40:54.221087 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 23:40:54.221144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:40:54.224246 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:40:54.229021 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:40:54.229101 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:40:54.237497 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 23:40:54.256559 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 23:40:54.256753 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 4 23:40:54.259493 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 23:40:54.259745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:40:54.263692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 23:40:54.263767 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 23:40:54.264890 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 23:40:54.264947 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:40:54.266854 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 23:40:54.266929 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:40:54.272262 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 23:40:54.272424 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 23:40:54.276237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:40:54.276388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:40:54.287616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 23:40:54.290249 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 23:40:54.290386 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:40:54.292927 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 23:40:54.292989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:40:54.294537 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 23:40:54.294632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:40:54.297116 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 4 23:40:54.297178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:54.300644 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 23:40:54.300723 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:40:54.301169 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 23:40:54.301319 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 23:40:54.688507 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:40:54.688659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 23:40:54.691190 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 23:40:54.692476 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 23:40:54.692554 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 23:40:54.704653 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 23:40:54.714713 systemd[1]: Switching root. Sep 4 23:40:54.747454 systemd-journald[191]: Journal stopped Sep 4 23:40:56.281145 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Sep 4 23:40:56.281244 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:40:56.281320 kernel: SELinux: policy capability open_perms=1 Sep 4 23:40:56.281343 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:40:56.281367 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:40:56.281383 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:40:56.281400 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:40:56.281424 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:40:56.281447 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:40:56.281470 kernel: audit: type=1403 audit(1757029255.310:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:40:56.281500 systemd[1]: Successfully loaded SELinux policy in 46.298ms. Sep 4 23:40:56.281521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.110ms. Sep 4 23:40:56.281540 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:40:56.281558 systemd[1]: Detected virtualization kvm. Sep 4 23:40:56.281575 systemd[1]: Detected architecture x86-64. Sep 4 23:40:56.281592 systemd[1]: Detected first boot. Sep 4 23:40:56.281609 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:40:56.281626 zram_generator::config[1063]: No configuration found. Sep 4 23:40:56.281644 kernel: Guest personality initialized and is inactive Sep 4 23:40:56.281665 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 23:40:56.281765 kernel: Initialized host personality Sep 4 23:40:56.281781 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:40:56.281798 systemd[1]: Populated /etc with preset unit settings. 
Sep 4 23:40:56.281819 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:40:56.281843 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:40:56.281860 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:40:56.281878 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:40:56.281897 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:40:56.281926 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:40:56.281944 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:40:56.281971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:40:56.281989 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:40:56.282008 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:40:56.282025 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 23:40:56.282042 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:40:56.282060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:40:56.282082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:40:56.282099 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 23:40:56.282116 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:40:56.282133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:40:56.282160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 23:40:56.282199 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 23:40:56.282217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:40:56.282235 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:40:56.282258 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:40:56.282305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 23:40:56.282325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:40:56.282349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:40:56.282366 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:40:56.282383 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:40:56.282399 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:40:56.282416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:40:56.282433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 23:40:56.282458 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:40:56.282475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:40:56.282493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:40:56.282511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:40:56.282529 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 23:40:56.282547 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:40:56.282564 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:40:56.282582 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 4 23:40:56.282599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:56.282633 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:40:56.282651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:40:56.282668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:40:56.282687 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:40:56.282704 systemd[1]: Reached target machines.target - Containers. Sep 4 23:40:56.282721 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:40:56.282738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:40:56.282756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:40:56.282778 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:40:56.282797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:40:56.282816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:40:56.282834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:40:56.282852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:40:56.282871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:40:56.282889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 23:40:56.282906 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Sep 4 23:40:56.282924 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 23:40:56.282945 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 23:40:56.282974 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 23:40:56.282993 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:40:56.283011 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:40:56.283038 kernel: fuse: init (API version 7.39) Sep 4 23:40:56.283055 kernel: loop: module loaded Sep 4 23:40:56.283071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:40:56.283088 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 23:40:56.283109 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 23:40:56.283127 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 23:40:56.283144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:40:56.283161 kernel: ACPI: bus type drm_connector registered Sep 4 23:40:56.283177 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 23:40:56.283199 systemd[1]: Stopped verity-setup.service. Sep 4 23:40:56.283218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:56.283237 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 23:40:56.283257 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 23:40:56.283333 systemd-journald[1138]: Collecting audit messages is disabled. 
Sep 4 23:40:56.283370 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 23:40:56.283388 systemd-journald[1138]: Journal started Sep 4 23:40:56.283420 systemd-journald[1138]: Runtime Journal (/run/log/journal/f8ee0f62dcec4779ae521a40d91074eb) is 6M, max 48M, 42M free. Sep 4 23:40:56.000213 systemd[1]: Queued start job for default target multi-user.target. Sep 4 23:40:56.015687 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 23:40:56.016223 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 23:40:56.288023 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:40:56.289036 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 23:40:56.290876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 23:40:56.292442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 23:40:56.294151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:40:56.296206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:40:56.298293 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 23:40:56.298618 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 23:40:56.300633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:40:56.300945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:40:56.303157 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:40:56.303502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:40:56.305440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:40:56.305758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:40:56.307844 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 4 23:40:56.308170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 23:40:56.310164 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:40:56.310496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:40:56.312523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:40:56.314581 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 23:40:56.316822 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 23:40:56.318900 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 23:40:56.344931 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 23:40:56.363477 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 23:40:56.367023 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 23:40:56.368490 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:40:56.368535 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:40:56.371366 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 23:40:56.374988 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:40:56.378886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:40:56.380633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:40:56.393804 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:40:56.397700 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 4 23:40:56.399200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:40:56.407557 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:40:56.408872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:40:56.411852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:40:56.415357 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 23:40:56.418391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:40:56.422130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:40:56.425736 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:40:56.427228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:40:56.443112 systemd-journald[1138]: Time spent on flushing to /var/log/journal/f8ee0f62dcec4779ae521a40d91074eb is 17.884ms for 1041 entries. Sep 4 23:40:56.443112 systemd-journald[1138]: System Journal (/var/log/journal/f8ee0f62dcec4779ae521a40d91074eb) is 8M, max 195.6M, 187.6M free. Sep 4 23:40:56.502892 systemd-journald[1138]: Received client request to flush runtime journal. Sep 4 23:40:56.502963 kernel: loop0: detected capacity change from 0 to 147912 Sep 4 23:40:56.433754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:40:56.448992 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:40:56.452001 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:40:56.463548 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 4 23:40:56.467471 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:40:56.470860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:40:56.504519 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 4 23:40:56.504533 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 4 23:40:56.507146 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:40:56.512666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:40:56.522205 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:40:56.529519 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:40:56.540320 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:40:56.542954 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:40:56.569616 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:40:56.578322 kernel: loop1: detected capacity change from 0 to 229808 Sep 4 23:40:56.580705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:40:56.621506 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Sep 4 23:40:56.621534 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Sep 4 23:40:56.630230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 23:40:56.644368 kernel: loop2: detected capacity change from 0 to 138176 Sep 4 23:40:56.738337 kernel: loop3: detected capacity change from 0 to 147912 Sep 4 23:40:56.759415 kernel: loop4: detected capacity change from 0 to 229808 Sep 4 23:40:56.777318 kernel: loop5: detected capacity change from 0 to 138176 Sep 4 23:40:56.791823 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 23:40:56.792723 (sd-merge)[1211]: Merged extensions into '/usr'. Sep 4 23:40:56.810330 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:40:56.810354 systemd[1]: Reloading... Sep 4 23:40:56.899336 zram_generator::config[1239]: No configuration found. Sep 4 23:40:57.037225 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:40:57.092331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:40:57.172688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:40:57.173497 systemd[1]: Reloading finished in 362 ms. Sep 4 23:40:57.198918 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:40:57.200669 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:40:57.221257 systemd[1]: Starting ensure-sysext.service... Sep 4 23:40:57.223902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:40:57.237175 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:40:57.237197 systemd[1]: Reloading... Sep 4 23:40:57.325805 zram_generator::config[1311]: No configuration found. 
Sep 4 23:40:57.342236 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:40:57.343113 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 23:40:57.344317 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 23:40:57.344607 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Sep 4 23:40:57.344696 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Sep 4 23:40:57.350173 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:40:57.350192 systemd-tmpfiles[1277]: Skipping /boot Sep 4 23:40:57.367564 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:40:57.367582 systemd-tmpfiles[1277]: Skipping /boot Sep 4 23:40:57.451384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:40:57.527343 systemd[1]: Reloading finished in 289 ms. Sep 4 23:40:57.541500 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:40:57.562766 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:40:57.584739 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:40:57.587705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 23:40:57.590334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:40:57.595219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:40:57.603463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 4 23:40:57.608805 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:40:57.614154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:57.614376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:40:57.627459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:40:57.631556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:40:57.634471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:40:57.636539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:40:57.636696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:40:57.640329 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:40:57.641439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:57.642932 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:40:57.644967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:40:57.645216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:40:57.648297 augenrules[1375]: No rules Sep 4 23:40:57.649420 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:40:57.650223 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:40:57.652096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 4 23:40:57.652355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:40:57.654700 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:40:57.654956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:40:57.662167 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Sep 4 23:40:57.673573 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:40:57.681623 systemd[1]: Finished ensure-sysext.service. Sep 4 23:40:57.685035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:57.698608 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:40:57.699771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:40:57.703477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:40:57.707102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:40:57.710542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:40:57.714013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:40:57.715970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:40:57.716019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:40:57.718463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 23:40:57.722812 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 4 23:40:57.724007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:40:57.724560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:40:57.727631 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 23:40:57.729251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:40:57.731229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:40:57.732509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:40:57.734561 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:40:57.734844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:40:57.738416 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:40:57.738656 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:40:57.752067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:40:57.752354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:40:57.754646 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:40:57.763380 augenrules[1387]: /sbin/augenrules: No change Sep 4 23:40:57.780199 augenrules[1442]: No rules Sep 4 23:40:57.783472 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:40:57.784841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:40:57.784945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 4 23:40:57.784974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:40:57.785398 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:40:57.786080 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:40:57.835098 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1413) Sep 4 23:40:57.837621 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 23:40:57.896346 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 23:40:57.932677 systemd-resolved[1352]: Positive Trust Anchors: Sep 4 23:40:57.933133 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:40:57.933221 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:40:57.935305 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 23:40:57.942691 systemd-resolved[1352]: Defaulting to hostname 'linux'. Sep 4 23:40:57.946312 kernel: ACPI: button: Power Button [PWRF] Sep 4 23:40:57.948103 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:40:57.949694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 4 23:40:57.951270 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:40:57.954784 systemd-networkd[1443]: lo: Link UP Sep 4 23:40:57.955369 systemd-networkd[1443]: lo: Gained carrier Sep 4 23:40:57.958049 systemd-networkd[1443]: Enumeration completed Sep 4 23:40:57.958623 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:40:57.958699 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:40:57.959438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:40:57.959586 systemd-networkd[1443]: eth0: Link UP Sep 4 23:40:57.959636 systemd-networkd[1443]: eth0: Gained carrier Sep 4 23:40:57.959697 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:40:57.961311 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:40:57.962871 systemd[1]: Reached target network.target - Network. Sep 4 23:40:57.972313 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 23:40:57.976181 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 23:40:59.040462 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 4 23:40:59.046240 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 23:40:57.969935 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:40:57.978405 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:40:57.979177 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Sep 4 23:40:59.036854 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 4 23:40:59.036932 systemd-timesyncd[1410]: Initial clock synchronization to Thu 2025-09-04 23:40:59.036745 UTC. Sep 4 23:40:59.038563 systemd-resolved[1352]: Clock change detected. Flushing caches. Sep 4 23:40:59.040188 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:40:59.077114 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:40:59.102424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:40:59.104234 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 23:40:59.131931 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 23:40:59.142342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:59.165939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:40:59.166964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:59.181368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:40:59.184967 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:40:59.274635 kernel: kvm_amd: TSC scaling supported Sep 4 23:40:59.274769 kernel: kvm_amd: Nested Virtualization enabled Sep 4 23:40:59.274791 kernel: kvm_amd: Nested Paging enabled Sep 4 23:40:59.275261 kernel: kvm_amd: LBR virtualization supported Sep 4 23:40:59.275978 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 23:40:59.277489 kernel: kvm_amd: Virtual GIF supported Sep 4 23:40:59.304948 kernel: EDAC MC: Ver: 3.0.0 Sep 4 23:40:59.314817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:40:59.348725 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Sep 4 23:40:59.362328 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:40:59.372099 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:40:59.431881 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:40:59.434214 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:40:59.435498 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:40:59.436805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:40:59.438170 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:40:59.439757 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:40:59.441001 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:40:59.442287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:40:59.443540 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:40:59.443587 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:40:59.444499 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:40:59.446651 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:40:59.449668 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:40:59.453949 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:40:59.455492 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:40:59.456856 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Sep 4 23:40:59.461580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:40:59.463198 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:40:59.465942 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:40:59.467967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:40:59.469316 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:40:59.470431 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:40:59.471577 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:40:59.471620 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:40:59.473030 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:40:59.475643 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:40:59.480561 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:40:59.484216 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:40:59.486408 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:40:59.487734 jq[1487]: false Sep 4 23:40:59.489873 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:40:59.490689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:40:59.496022 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:40:59.501063 dbus-daemon[1486]: [system] SELinux support is enabled Sep 4 23:40:59.501685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 23:40:59.507137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:40:59.516693 extend-filesystems[1488]: Found loop3 Sep 4 23:40:59.517381 extend-filesystems[1488]: Found loop4 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found loop5 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found sr0 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda1 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda2 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda3 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found usr Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda4 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda6 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda7 Sep 4 23:40:59.518867 extend-filesystems[1488]: Found vda9 Sep 4 23:40:59.518867 extend-filesystems[1488]: Checking size of /dev/vda9 Sep 4 23:40:59.521614 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:40:59.529525 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:40:59.531171 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:40:59.540204 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:40:59.544640 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:40:59.547735 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:40:59.560950 jq[1505]: true Sep 4 23:40:59.565100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:40:59.565453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:40:59.565926 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 4 23:40:59.566222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:40:59.574094 extend-filesystems[1488]: Resized partition /dev/vda9 Sep 4 23:40:59.576316 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:40:59.576627 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:40:59.590762 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:40:59.597373 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:40:59.600997 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 23:40:59.608367 update_engine[1504]: I20250904 23:40:59.608285 1504 main.cc:92] Flatcar Update Engine starting Sep 4 23:40:59.610274 update_engine[1504]: I20250904 23:40:59.610162 1504 update_check_scheduler.cc:74] Next update check in 7m2s Sep 4 23:40:59.614843 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1401) Sep 4 23:40:59.621261 jq[1510]: true Sep 4 23:40:59.635974 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:40:59.656308 tar[1508]: linux-amd64/LICENSE Sep 4 23:40:59.662926 tar[1508]: linux-amd64/helm Sep 4 23:40:59.660877 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:40:59.664446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:40:59.664519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:40:59.667398 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 4 23:40:59.667452 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:40:59.680927 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 23:40:59.681233 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:40:59.709102 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 23:40:59.709102 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:40:59.709102 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 23:40:59.713592 extend-filesystems[1488]: Resized filesystem in /dev/vda9 Sep 4 23:40:59.715454 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:40:59.716026 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:40:59.740842 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 23:40:59.740882 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 23:40:59.743756 systemd-logind[1494]: New seat seat0. Sep 4 23:40:59.756561 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:40:59.775994 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:40:59.782299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:40:59.786204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:40:59.793400 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:41:00.033354 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:41:00.099859 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:41:00.112336 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:41:00.123325 systemd[1]: issuegen.service: Deactivated successfully. 
Sep 4 23:41:00.123685 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:41:00.134388 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:41:00.175449 containerd[1512]: time="2025-09-04T23:41:00.175172031Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:41:00.177694 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:41:00.188295 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:41:00.192308 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:41:00.193848 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:41:00.210427 containerd[1512]: time="2025-09-04T23:41:00.210307580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.213729185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.213792284Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.213819735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214093839Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214122373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214205979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214218483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214488930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214545987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214560714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:41:00.214935 containerd[1512]: time="2025-09-04T23:41:00.214569661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.215389 containerd[1512]: time="2025-09-04T23:41:00.214673927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.215389 containerd[1512]: time="2025-09-04T23:41:00.214982896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:41:00.215389 containerd[1512]: time="2025-09-04T23:41:00.215153796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:41:00.215389 containerd[1512]: time="2025-09-04T23:41:00.215166089Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:41:00.215389 containerd[1512]: time="2025-09-04T23:41:00.215276497Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:41:00.215534 containerd[1512]: time="2025-09-04T23:41:00.215394177Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:41:00.222675 containerd[1512]: time="2025-09-04T23:41:00.222609155Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:41:00.222788 containerd[1512]: time="2025-09-04T23:41:00.222704714Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:41:00.222788 containerd[1512]: time="2025-09-04T23:41:00.222730994Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:41:00.222788 containerd[1512]: time="2025-09-04T23:41:00.222752775Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:41:00.222788 containerd[1512]: time="2025-09-04T23:41:00.222769626Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:41:00.223053 containerd[1512]: time="2025-09-04T23:41:00.223022851Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:41:00.223335 containerd[1512]: time="2025-09-04T23:41:00.223309739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 4 23:41:00.223461 containerd[1512]: time="2025-09-04T23:41:00.223438450Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:41:00.223487 containerd[1512]: time="2025-09-04T23:41:00.223462185Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:41:00.223487 containerd[1512]: time="2025-09-04T23:41:00.223476902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:41:00.223546 containerd[1512]: time="2025-09-04T23:41:00.223506868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223546 containerd[1512]: time="2025-09-04T23:41:00.223528108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223585 containerd[1512]: time="2025-09-04T23:41:00.223546002Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223585 containerd[1512]: time="2025-09-04T23:41:00.223567432Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223621 containerd[1512]: time="2025-09-04T23:41:00.223588441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223621 containerd[1512]: time="2025-09-04T23:41:00.223606275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223663 containerd[1512]: time="2025-09-04T23:41:00.223623297Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 4 23:41:00.223663 containerd[1512]: time="2025-09-04T23:41:00.223639988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:41:00.223701 containerd[1512]: time="2025-09-04T23:41:00.223686235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223721 containerd[1512]: time="2025-09-04T23:41:00.223704138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223747 containerd[1512]: time="2025-09-04T23:41:00.223719397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223747 containerd[1512]: time="2025-09-04T23:41:00.223731430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223747 containerd[1512]: time="2025-09-04T23:41:00.223743723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223809 containerd[1512]: time="2025-09-04T23:41:00.223757198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223809 containerd[1512]: time="2025-09-04T23:41:00.223768619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223809 containerd[1512]: time="2025-09-04T23:41:00.223781463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223809 containerd[1512]: time="2025-09-04T23:41:00.223793446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223809 containerd[1512]: time="2025-09-04T23:41:00.223807592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223819565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223835304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223851104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223868677Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223913361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.223939 containerd[1512]: time="2025-09-04T23:41:00.223933088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.224062 containerd[1512]: time="2025-09-04T23:41:00.223949849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:41:00.224062 containerd[1512]: time="2025-09-04T23:41:00.224026803Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:41:00.224062 containerd[1512]: time="2025-09-04T23:41:00.224052642Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:41:00.224119 containerd[1512]: time="2025-09-04T23:41:00.224066247Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 4 23:41:00.224119 containerd[1512]: time="2025-09-04T23:41:00.224084101Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:41:00.224119 containerd[1512]: time="2025-09-04T23:41:00.224095552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.224119 containerd[1512]: time="2025-09-04T23:41:00.224111993Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:41:00.224197 containerd[1512]: time="2025-09-04T23:41:00.224143592Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:41:00.224197 containerd[1512]: time="2025-09-04T23:41:00.224173819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:41:00.225807 containerd[1512]: time="2025-09-04T23:41:00.224531560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:41:00.225807 containerd[1512]: time="2025-09-04T23:41:00.224598515Z" level=info msg="Connect containerd service" Sep 4 23:41:00.225807 containerd[1512]: time="2025-09-04T23:41:00.224639763Z" level=info msg="using legacy CRI server" Sep 4 23:41:00.225807 containerd[1512]: time="2025-09-04T23:41:00.224649361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:41:00.225807 containerd[1512]: time="2025-09-04T23:41:00.224789604Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:41:00.226218 containerd[1512]: time="2025-09-04T23:41:00.226136429Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:41:00.226615 containerd[1512]: time="2025-09-04T23:41:00.226516532Z" level=info msg="Start subscribing containerd event" Sep 4 23:41:00.226680 containerd[1512]: time="2025-09-04T23:41:00.226646525Z" level=info msg="Start recovering state" Sep 4 23:41:00.226824 containerd[1512]: time="2025-09-04T23:41:00.226799753Z" level=info msg="Start event monitor" Sep 4 23:41:00.226866 containerd[1512]: time="2025-09-04T23:41:00.226833586Z" level=info msg="Start snapshots syncer" Sep 4 23:41:00.226866 containerd[1512]: time="2025-09-04T23:41:00.226850959Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:41:00.226939 containerd[1512]: time="2025-09-04T23:41:00.226868021Z" level=info msg="Start streaming server" Sep 4 23:41:00.227606 containerd[1512]: time="2025-09-04T23:41:00.227548076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:41:00.227717 containerd[1512]: time="2025-09-04T23:41:00.227667850Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:41:00.227864 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:41:00.229414 containerd[1512]: time="2025-09-04T23:41:00.229387404Z" level=info msg="containerd successfully booted in 0.057508s" Sep 4 23:41:00.322931 tar[1508]: linux-amd64/README.md Sep 4 23:41:00.341267 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 4 23:41:00.742184 systemd-networkd[1443]: eth0: Gained IPv6LL Sep 4 23:41:00.747276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:41:00.749613 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:41:00.763462 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 23:41:00.767065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:00.769543 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:41:00.799464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:41:00.801673 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 23:41:00.802031 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 23:41:00.805630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:41:02.080062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:02.082049 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:41:02.085035 systemd[1]: Startup finished in 1.326s (kernel) + 7.565s (initrd) + 5.763s (userspace) = 14.656s. Sep 4 23:41:02.088244 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:41:02.533953 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:41:02.557388 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:38030.service - OpenSSH per-connection server daemon (10.0.0.1:38030). 
Sep 4 23:41:02.708282 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 38030 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:02.711249 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:02.719791 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:41:02.727127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:41:02.733796 systemd-logind[1494]: New session 1 of user core. Sep 4 23:41:02.753622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:41:02.763336 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:41:02.770052 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:41:02.773941 systemd-logind[1494]: New session c1 of user core. Sep 4 23:41:02.982879 systemd[1614]: Queued start job for default target default.target. Sep 4 23:41:02.997560 systemd[1614]: Created slice app.slice - User Application Slice. Sep 4 23:41:02.997599 systemd[1614]: Reached target paths.target - Paths. Sep 4 23:41:02.997667 systemd[1614]: Reached target timers.target - Timers. Sep 4 23:41:02.999985 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:41:03.030547 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:41:03.030773 systemd[1614]: Reached target sockets.target - Sockets. Sep 4 23:41:03.030848 systemd[1614]: Reached target basic.target - Basic System. Sep 4 23:41:03.030951 systemd[1614]: Reached target default.target - Main User Target. Sep 4 23:41:03.031020 systemd[1614]: Startup finished in 246ms. Sep 4 23:41:03.031493 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:41:03.043200 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 23:41:03.133356 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:38040.service - OpenSSH per-connection server daemon (10.0.0.1:38040). Sep 4 23:41:03.182936 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 38040 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:03.185064 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:03.186871 kubelet[1599]: E0904 23:41:03.186809 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:41:03.190519 systemd-logind[1494]: New session 2 of user core. Sep 4 23:41:03.204095 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:41:03.204614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:41:03.204811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:41:03.205265 systemd[1]: kubelet.service: Consumed 1.919s CPU time, 266.3M memory peak. Sep 4 23:41:03.262970 sshd[1629]: Connection closed by 10.0.0.1 port 38040 Sep 4 23:41:03.263765 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Sep 4 23:41:03.278009 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:38040.service: Deactivated successfully. Sep 4 23:41:03.280350 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:41:03.281600 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:41:03.293490 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:38044.service - OpenSSH per-connection server daemon (10.0.0.1:38044). Sep 4 23:41:03.294984 systemd-logind[1494]: Removed session 2. 
Sep 4 23:41:03.332376 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 38044 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:03.334498 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:03.340195 systemd-logind[1494]: New session 3 of user core. Sep 4 23:41:03.357236 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:41:03.408259 sshd[1637]: Connection closed by 10.0.0.1 port 38044 Sep 4 23:41:03.408761 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Sep 4 23:41:03.422446 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:38044.service: Deactivated successfully. Sep 4 23:41:03.425566 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:41:03.428302 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:41:03.443374 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:38058.service - OpenSSH per-connection server daemon (10.0.0.1:38058). Sep 4 23:41:03.444985 systemd-logind[1494]: Removed session 3. Sep 4 23:41:03.487536 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 38058 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:03.489429 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:03.494848 systemd-logind[1494]: New session 4 of user core. Sep 4 23:41:03.513045 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:41:03.569521 sshd[1645]: Connection closed by 10.0.0.1 port 38058 Sep 4 23:41:03.570175 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Sep 4 23:41:03.583875 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:38058.service: Deactivated successfully. Sep 4 23:41:03.586288 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:41:03.587841 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. 
Sep 4 23:41:03.600240 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:38068.service - OpenSSH per-connection server daemon (10.0.0.1:38068). Sep 4 23:41:03.601585 systemd-logind[1494]: Removed session 4. Sep 4 23:41:03.639476 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 38068 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:03.641446 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:03.646297 systemd-logind[1494]: New session 5 of user core. Sep 4 23:41:03.656055 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:41:03.719501 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:41:03.719985 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:41:03.737042 sudo[1654]: pam_unix(sudo:session): session closed for user root Sep 4 23:41:03.739182 sshd[1653]: Connection closed by 10.0.0.1 port 38068 Sep 4 23:41:03.739734 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Sep 4 23:41:03.758854 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:38068.service: Deactivated successfully. Sep 4 23:41:03.761784 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:41:03.763749 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:41:03.778965 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:38080.service - OpenSSH per-connection server daemon (10.0.0.1:38080). Sep 4 23:41:03.781164 systemd-logind[1494]: Removed session 5. Sep 4 23:41:03.819283 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 38080 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:03.821189 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:03.828339 systemd-logind[1494]: New session 6 of user core. 
Sep 4 23:41:03.838195 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:41:03.900480 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:41:03.900873 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:41:03.907387 sudo[1664]: pam_unix(sudo:session): session closed for user root Sep 4 23:41:03.918440 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:41:03.918985 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:41:03.945291 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:41:03.982314 augenrules[1686]: No rules Sep 4 23:41:03.984564 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:41:03.984936 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:41:03.986472 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 4 23:41:03.988594 sshd[1662]: Connection closed by 10.0.0.1 port 38080 Sep 4 23:41:03.989111 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Sep 4 23:41:04.003845 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:38080.service: Deactivated successfully. Sep 4 23:41:04.006205 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:41:04.008563 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:41:04.021302 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:38082.service - OpenSSH per-connection server daemon (10.0.0.1:38082). Sep 4 23:41:04.022673 systemd-logind[1494]: Removed session 6. 
Sep 4 23:41:04.060311 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 38082 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:41:04.062143 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:41:04.067454 systemd-logind[1494]: New session 7 of user core. Sep 4 23:41:04.079362 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:41:04.135359 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:41:04.135881 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:41:05.815368 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:41:05.815532 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:41:06.833301 dockerd[1718]: time="2025-09-04T23:41:06.833204890Z" level=info msg="Starting up" Sep 4 23:41:07.450103 dockerd[1718]: time="2025-09-04T23:41:07.450020850Z" level=info msg="Loading containers: start." Sep 4 23:41:07.788935 kernel: Initializing XFRM netlink socket Sep 4 23:41:07.881764 systemd-networkd[1443]: docker0: Link UP Sep 4 23:41:07.942427 dockerd[1718]: time="2025-09-04T23:41:07.942350999Z" level=info msg="Loading containers: done." Sep 4 23:41:07.997714 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2306139477-merged.mount: Deactivated successfully. 
Sep 4 23:41:08.003121 dockerd[1718]: time="2025-09-04T23:41:08.003049128Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:41:08.003248 dockerd[1718]: time="2025-09-04T23:41:08.003215290Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:41:08.003407 dockerd[1718]: time="2025-09-04T23:41:08.003380119Z" level=info msg="Daemon has completed initialization" Sep 4 23:41:08.051957 dockerd[1718]: time="2025-09-04T23:41:08.051865880Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:41:08.052175 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:41:09.055680 containerd[1512]: time="2025-09-04T23:41:09.055597296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 4 23:41:12.114119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758046157.mount: Deactivated successfully. Sep 4 23:41:13.455337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:41:13.465167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:13.676721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:41:13.682226 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:41:13.893301 kubelet[1938]: E0904 23:41:13.893090 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:41:13.900404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:41:13.900689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:41:13.901196 systemd[1]: kubelet.service: Consumed 265ms CPU time, 113.1M memory peak. Sep 4 23:41:16.137002 containerd[1512]: time="2025-09-04T23:41:16.136870130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:16.138127 containerd[1512]: time="2025-09-04T23:41:16.138004768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 4 23:41:16.139767 containerd[1512]: time="2025-09-04T23:41:16.139687252Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:16.144652 containerd[1512]: time="2025-09-04T23:41:16.144590246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:16.145653 containerd[1512]: time="2025-09-04T23:41:16.145572227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id 
\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 7.089904559s" Sep 4 23:41:16.145653 containerd[1512]: time="2025-09-04T23:41:16.145624104Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 4 23:41:16.146509 containerd[1512]: time="2025-09-04T23:41:16.146468207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 4 23:41:18.279593 containerd[1512]: time="2025-09-04T23:41:18.279503307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:18.280715 containerd[1512]: time="2025-09-04T23:41:18.280672629Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 4 23:41:18.284453 containerd[1512]: time="2025-09-04T23:41:18.284414626Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:18.287490 containerd[1512]: time="2025-09-04T23:41:18.287461108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:18.289133 containerd[1512]: time="2025-09-04T23:41:18.289100011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.142579947s" Sep 4 23:41:18.289172 containerd[1512]: time="2025-09-04T23:41:18.289140968Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 4 23:41:18.289734 containerd[1512]: time="2025-09-04T23:41:18.289708371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 4 23:41:20.299178 containerd[1512]: time="2025-09-04T23:41:20.299111832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:20.300082 containerd[1512]: time="2025-09-04T23:41:20.300040003Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 4 23:41:20.301398 containerd[1512]: time="2025-09-04T23:41:20.301362482Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:20.305202 containerd[1512]: time="2025-09-04T23:41:20.305130958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:20.306629 containerd[1512]: time="2025-09-04T23:41:20.306581618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.016835837s" Sep 4 23:41:20.306749 containerd[1512]: 
time="2025-09-04T23:41:20.306723935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 4 23:41:20.307516 containerd[1512]: time="2025-09-04T23:41:20.307335402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 4 23:41:21.616805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774683126.mount: Deactivated successfully. Sep 4 23:41:22.345925 containerd[1512]: time="2025-09-04T23:41:22.345829167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:22.346690 containerd[1512]: time="2025-09-04T23:41:22.346642031Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 4 23:41:22.347976 containerd[1512]: time="2025-09-04T23:41:22.347920879Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:22.351366 containerd[1512]: time="2025-09-04T23:41:22.351305996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:22.352453 containerd[1512]: time="2025-09-04T23:41:22.352396220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.045028999s" Sep 4 23:41:22.352453 containerd[1512]: time="2025-09-04T23:41:22.352435905Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" 
returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 4 23:41:22.353076 containerd[1512]: time="2025-09-04T23:41:22.353047942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 4 23:41:22.892069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859174042.mount: Deactivated successfully. Sep 4 23:41:24.151395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:41:24.165208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:24.549856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:24.555813 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:41:24.660097 kubelet[2063]: E0904 23:41:24.659913 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:41:24.664850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:41:24.665163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:41:24.665672 systemd[1]: kubelet.service: Consumed 249ms CPU time, 113M memory peak. 
Sep 4 23:41:24.995586 containerd[1512]: time="2025-09-04T23:41:24.995380595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.044912 containerd[1512]: time="2025-09-04T23:41:25.044822649Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 4 23:41:25.064598 containerd[1512]: time="2025-09-04T23:41:25.064505384Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.069199 containerd[1512]: time="2025-09-04T23:41:25.069107824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.070558 containerd[1512]: time="2025-09-04T23:41:25.070498100Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.717412868s" Sep 4 23:41:25.070558 containerd[1512]: time="2025-09-04T23:41:25.070554727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 4 23:41:25.071303 containerd[1512]: time="2025-09-04T23:41:25.071180550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:41:25.724888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915964849.mount: Deactivated successfully. 
Sep 4 23:41:25.734540 containerd[1512]: time="2025-09-04T23:41:25.734457053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.735980 containerd[1512]: time="2025-09-04T23:41:25.735927981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 23:41:25.738581 containerd[1512]: time="2025-09-04T23:41:25.738530591Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.741153 containerd[1512]: time="2025-09-04T23:41:25.741117922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:25.742190 containerd[1512]: time="2025-09-04T23:41:25.742128237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 670.854963ms" Sep 4 23:41:25.742190 containerd[1512]: time="2025-09-04T23:41:25.742175906Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:41:25.742711 containerd[1512]: time="2025-09-04T23:41:25.742686934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 4 23:41:26.219830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719944343.mount: Deactivated successfully. 
Sep 4 23:41:28.983083 containerd[1512]: time="2025-09-04T23:41:28.983010650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:28.983905 containerd[1512]: time="2025-09-04T23:41:28.983860404Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 4 23:41:28.985282 containerd[1512]: time="2025-09-04T23:41:28.985246372Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:28.988546 containerd[1512]: time="2025-09-04T23:41:28.988496907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:41:28.991965 containerd[1512]: time="2025-09-04T23:41:28.991882776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.249162459s" Sep 4 23:41:28.991965 containerd[1512]: time="2025-09-04T23:41:28.991960071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 4 23:41:31.849594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:31.849848 systemd[1]: kubelet.service: Consumed 249ms CPU time, 113M memory peak. Sep 4 23:41:31.863220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:31.892599 systemd[1]: Reload requested from client PID 2160 ('systemctl') (unit session-7.scope)... 
Sep 4 23:41:31.892627 systemd[1]: Reloading... Sep 4 23:41:32.039961 zram_generator::config[2213]: No configuration found. Sep 4 23:41:32.282964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:41:32.397901 systemd[1]: Reloading finished in 504 ms. Sep 4 23:41:32.470172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:32.477115 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:41:32.500483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:32.508457 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:41:32.508930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:32.509006 systemd[1]: kubelet.service: Consumed 203ms CPU time, 106.1M memory peak. Sep 4 23:41:32.530964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:32.796548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:32.810469 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:41:32.937974 kubelet[2260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:41:32.937974 kubelet[2260]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 4 23:41:32.941863 kubelet[2260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:41:32.942100 kubelet[2260]: I0904 23:41:32.942027 2260 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:41:33.506077 kubelet[2260]: I0904 23:41:33.506007 2260 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:41:33.506077 kubelet[2260]: I0904 23:41:33.506051 2260 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:41:33.506374 kubelet[2260]: I0904 23:41:33.506346 2260 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:41:33.557539 kubelet[2260]: E0904 23:41:33.557455 2260 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:41:33.560437 kubelet[2260]: I0904 23:41:33.560379 2260 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:41:33.582772 kubelet[2260]: E0904 23:41:33.582235 2260 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:41:33.582772 kubelet[2260]: I0904 23:41:33.582337 2260 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 4 23:41:33.593059 kubelet[2260]: I0904 23:41:33.592999 2260 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:41:33.593447 kubelet[2260]: I0904 23:41:33.593375 2260 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:41:33.594035 kubelet[2260]: I0904 23:41:33.593430 2260 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 
23:41:33.594035 kubelet[2260]: I0904 23:41:33.593728 2260 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:41:33.594035 kubelet[2260]: I0904 23:41:33.593745 2260 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:41:33.598132 kubelet[2260]: I0904 23:41:33.597875 2260 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:41:33.602494 kubelet[2260]: I0904 23:41:33.602327 2260 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:41:33.602494 kubelet[2260]: I0904 23:41:33.602362 2260 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:41:33.605516 kubelet[2260]: I0904 23:41:33.605181 2260 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:41:33.605516 kubelet[2260]: I0904 23:41:33.605221 2260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:41:33.614926 kubelet[2260]: I0904 23:41:33.614847 2260 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:41:33.615453 kubelet[2260]: E0904 23:41:33.615315 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:41:33.615453 kubelet[2260]: E0904 23:41:33.615403 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:41:33.616000 kubelet[2260]: I0904 23:41:33.615846 2260 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:41:33.616922 kubelet[2260]: W0904 23:41:33.616864 2260 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:41:33.662440 kubelet[2260]: I0904 23:41:33.662386 2260 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:41:33.662926 kubelet[2260]: I0904 23:41:33.662501 2260 server.go:1289] "Started kubelet" Sep 4 23:41:33.663397 kubelet[2260]: I0904 23:41:33.663317 2260 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:41:33.665733 kubelet[2260]: I0904 23:41:33.665033 2260 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:41:33.665733 kubelet[2260]: I0904 23:41:33.665062 2260 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:41:33.670497 kubelet[2260]: I0904 23:41:33.670333 2260 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:41:33.670815 kubelet[2260]: I0904 23:41:33.665031 2260 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:41:33.672534 kubelet[2260]: E0904 23:41:33.672384 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:33.672580 kubelet[2260]: I0904 23:41:33.672553 2260 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:41:33.672834 kubelet[2260]: I0904 23:41:33.672808 2260 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:41:33.673134 kubelet[2260]: I0904 23:41:33.672911 2260 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:41:33.673669 kubelet[2260]: E0904 23:41:33.673633 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:41:33.673704 kubelet[2260]: I0904 23:41:33.671368 2260 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:41:33.674228 kubelet[2260]: E0904 23:41:33.672681 2260 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186238d55cb47145 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:41:33.662433605 +0000 UTC m=+0.841991326,LastTimestamp:2025-09-04 23:41:33.662433605 +0000 UTC m=+0.841991326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:41:33.674315 kubelet[2260]: E0904 23:41:33.674277 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Sep 4 23:41:33.677532 kubelet[2260]: I0904 23:41:33.676730 2260 factory.go:223] Registration of the containerd container factory successfully Sep 4 23:41:33.677532 kubelet[2260]: I0904 23:41:33.676755 2260 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:41:33.677532 kubelet[2260]: I0904 23:41:33.676838 2260 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:41:33.683419 kubelet[2260]: E0904 23:41:33.683382 2260 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:41:33.698270 kubelet[2260]: I0904 23:41:33.698229 2260 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:41:33.698270 kubelet[2260]: I0904 23:41:33.698251 2260 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:41:33.698270 kubelet[2260]: I0904 23:41:33.698269 2260 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:41:33.699248 kubelet[2260]: I0904 23:41:33.699045 2260 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 4 23:41:33.701399 kubelet[2260]: I0904 23:41:33.701366 2260 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 4 23:41:33.701444 kubelet[2260]: I0904 23:41:33.701410 2260 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:41:33.701470 kubelet[2260]: I0904 23:41:33.701442 2260 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:41:33.701470 kubelet[2260]: I0904 23:41:33.701459 2260 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:41:33.701541 kubelet[2260]: E0904 23:41:33.701517 2260 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:41:33.773653 kubelet[2260]: E0904 23:41:33.773368 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:33.802826 kubelet[2260]: E0904 23:41:33.802668 2260 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:41:33.874424 kubelet[2260]: E0904 23:41:33.874355 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:33.874843 kubelet[2260]: E0904 23:41:33.874789 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Sep 4 23:41:33.975471 kubelet[2260]: E0904 23:41:33.975332 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.003745 kubelet[2260]: E0904 23:41:34.003605 2260 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:41:34.076578 kubelet[2260]: E0904 23:41:34.076350 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.177472 kubelet[2260]: E0904 23:41:34.177390 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.275612 kubelet[2260]: E0904 23:41:34.275517 2260 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Sep 4 23:41:34.278580 kubelet[2260]: E0904 23:41:34.278530 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.379140 kubelet[2260]: E0904 23:41:34.378957 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.404336 kubelet[2260]: E0904 23:41:34.404224 2260 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:41:34.428438 kubelet[2260]: E0904 23:41:34.428342 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:41:34.476156 kubelet[2260]: E0904 23:41:34.475996 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:41:34.479842 kubelet[2260]: E0904 23:41:34.479807 2260 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:41:34.491520 kubelet[2260]: I0904 23:41:34.491432 2260 policy_none.go:49] "None policy: Start" Sep 4 23:41:34.491520 kubelet[2260]: I0904 23:41:34.491500 2260 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:41:34.491520 kubelet[2260]: I0904 23:41:34.491519 2260 state_mem.go:35] "Initializing new 
in-memory state store" Sep 4 23:41:34.521152 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:41:34.536206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:41:34.539771 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:41:34.551174 kubelet[2260]: E0904 23:41:34.551124 2260 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:41:34.551467 kubelet[2260]: I0904 23:41:34.551444 2260 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:41:34.551502 kubelet[2260]: I0904 23:41:34.551466 2260 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:41:34.553657 kubelet[2260]: E0904 23:41:34.553618 2260 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:41:34.553715 kubelet[2260]: E0904 23:41:34.553693 2260 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:41:34.554384 kubelet[2260]: I0904 23:41:34.554347 2260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:41:34.644883 kubelet[2260]: E0904 23:41:34.644711 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:41:34.653786 kubelet[2260]: I0904 23:41:34.653705 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:34.654145 kubelet[2260]: E0904 23:41:34.654111 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:34.856556 kubelet[2260]: I0904 23:41:34.856502 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:34.857014 kubelet[2260]: E0904 23:41:34.856975 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:35.033103 kubelet[2260]: E0904 23:41:35.033022 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:41:35.076767 kubelet[2260]: E0904 23:41:35.076651 2260 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Sep 4 23:41:35.244934 systemd[1]: Created slice kubepods-burstable-pod06e3270e5c6cae6be062c2e4d3059349.slice - libcontainer container kubepods-burstable-pod06e3270e5c6cae6be062c2e4d3059349.slice. Sep 4 23:41:35.257244 kubelet[2260]: E0904 23:41:35.257193 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:35.258497 kubelet[2260]: I0904 23:41:35.258478 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:35.258964 kubelet[2260]: E0904 23:41:35.258917 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:35.262233 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 4 23:41:35.264023 kubelet[2260]: E0904 23:41:35.263988 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:35.284508 kubelet[2260]: I0904 23:41:35.284355 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:41:35.284508 kubelet[2260]: I0904 23:41:35.284415 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:41:35.284508 kubelet[2260]: I0904 23:41:35.284438 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:35.284508 kubelet[2260]: I0904 23:41:35.284499 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:41:35.284682 kubelet[2260]: I0904 23:41:35.284653 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:41:35.284756 kubelet[2260]: I0904 23:41:35.284706 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:35.284793 kubelet[2260]: I0904 23:41:35.284767 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:35.284827 kubelet[2260]: I0904 23:41:35.284794 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:35.284827 kubelet[2260]: I0904 23:41:35.284817 2260 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:35.358866 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container 
kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 4 23:41:35.360930 kubelet[2260]: E0904 23:41:35.360875 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:35.544202 kubelet[2260]: E0904 23:41:35.544020 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:41:35.558204 kubelet[2260]: E0904 23:41:35.558129 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:35.559215 containerd[1512]: time="2025-09-04T23:41:35.559140591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e3270e5c6cae6be062c2e4d3059349,Namespace:kube-system,Attempt:0,}" Sep 4 23:41:35.565634 kubelet[2260]: E0904 23:41:35.565570 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:35.566494 containerd[1512]: time="2025-09-04T23:41:35.566429660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 4 23:41:35.662138 kubelet[2260]: E0904 23:41:35.662083 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:35.662776 containerd[1512]: time="2025-09-04T23:41:35.662694651Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 4 23:41:35.749728 kubelet[2260]: E0904 23:41:35.749657 2260 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:41:36.060382 kubelet[2260]: I0904 23:41:36.060350 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:36.060861 kubelet[2260]: E0904 23:41:36.060814 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:36.449536 kubelet[2260]: E0904 23:41:36.449343 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:41:36.583146 kubelet[2260]: E0904 23:41:36.583060 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:41:36.678379 kubelet[2260]: E0904 23:41:36.678312 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: 
connect: connection refused" interval="3.2s" Sep 4 23:41:37.359058 kubelet[2260]: E0904 23:41:37.358972 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:41:37.663108 kubelet[2260]: I0904 23:41:37.662926 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:37.663431 kubelet[2260]: E0904 23:41:37.663345 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:37.710372 kubelet[2260]: E0904 23:41:37.710295 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:41:37.726059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748816637.mount: Deactivated successfully. 
Sep 4 23:41:38.100244 kubelet[2260]: E0904 23:41:38.100066 2260 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186238d55cb47145 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:41:33.662433605 +0000 UTC m=+0.841991326,LastTimestamp:2025-09-04 23:41:33.662433605 +0000 UTC m=+0.841991326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:41:38.397188 containerd[1512]: time="2025-09-04T23:41:38.396974645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:41:38.455635 containerd[1512]: time="2025-09-04T23:41:38.455513917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 23:41:38.474668 containerd[1512]: time="2025-09-04T23:41:38.474593458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:41:38.514580 containerd[1512]: time="2025-09-04T23:41:38.514478199Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:41:38.531318 containerd[1512]: time="2025-09-04T23:41:38.531257215Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:41:38.537640 containerd[1512]: time="2025-09-04T23:41:38.537564876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:41:38.552406 containerd[1512]: time="2025-09-04T23:41:38.552303835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:41:38.553514 containerd[1512]: time="2025-09-04T23:41:38.553449719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.994168528s" Sep 4 23:41:38.565102 containerd[1512]: time="2025-09-04T23:41:38.565017404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:41:38.577353 containerd[1512]: time="2025-09-04T23:41:38.577299790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.010760571s" Sep 4 23:41:38.616092 containerd[1512]: time="2025-09-04T23:41:38.616020933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.953179061s" Sep 4 23:41:39.181689 containerd[1512]: time="2025-09-04T23:41:39.181535119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:41:39.181689 containerd[1512]: time="2025-09-04T23:41:39.181613779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:41:39.181689 containerd[1512]: time="2025-09-04T23:41:39.181628037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.181992 containerd[1512]: time="2025-09-04T23:41:39.181724210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.198216 containerd[1512]: time="2025-09-04T23:41:39.193744685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:41:39.198216 containerd[1512]: time="2025-09-04T23:41:39.198147056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:41:39.198216 containerd[1512]: time="2025-09-04T23:41:39.198169509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.198630 containerd[1512]: time="2025-09-04T23:41:39.198326467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.207275 containerd[1512]: time="2025-09-04T23:41:39.206667718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:41:39.207275 containerd[1512]: time="2025-09-04T23:41:39.206731730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:41:39.207275 containerd[1512]: time="2025-09-04T23:41:39.206743411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.207275 containerd[1512]: time="2025-09-04T23:41:39.206853873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:41:39.226177 systemd[1]: Started cri-containerd-1a54ed5b425c7d8a5fe1110304d451fec676af8d79ab00c07ccacb68e006db82.scope - libcontainer container 1a54ed5b425c7d8a5fe1110304d451fec676af8d79ab00c07ccacb68e006db82. Sep 4 23:41:39.240853 systemd[1]: Started cri-containerd-f7cad906892b48ddf5f1e7cf8426d7216afa2e7e99bfa91cc9d989d5b8abc3e8.scope - libcontainer container f7cad906892b48ddf5f1e7cf8426d7216afa2e7e99bfa91cc9d989d5b8abc3e8. Sep 4 23:41:39.248658 systemd[1]: Started cri-containerd-6479c3e40839d36145323fdd8bcf2e8e53e22a123c64e8a5d4dd4179b547b6cb.scope - libcontainer container 6479c3e40839d36145323fdd8bcf2e8e53e22a123c64e8a5d4dd4179b547b6cb. 
Sep 4 23:41:39.297306 containerd[1512]: time="2025-09-04T23:41:39.297243399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a54ed5b425c7d8a5fe1110304d451fec676af8d79ab00c07ccacb68e006db82\"" Sep 4 23:41:39.299938 kubelet[2260]: E0904 23:41:39.299756 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:39.313100 containerd[1512]: time="2025-09-04T23:41:39.312862796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e3270e5c6cae6be062c2e4d3059349,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7cad906892b48ddf5f1e7cf8426d7216afa2e7e99bfa91cc9d989d5b8abc3e8\"" Sep 4 23:41:39.314500 kubelet[2260]: E0904 23:41:39.314370 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:39.321728 containerd[1512]: time="2025-09-04T23:41:39.321656347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6479c3e40839d36145323fdd8bcf2e8e53e22a123c64e8a5d4dd4179b547b6cb\"" Sep 4 23:41:39.322383 kubelet[2260]: E0904 23:41:39.322336 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:39.421125 containerd[1512]: time="2025-09-04T23:41:39.421051839Z" level=info msg="CreateContainer within sandbox \"1a54ed5b425c7d8a5fe1110304d451fec676af8d79ab00c07ccacb68e006db82\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:41:39.583367 containerd[1512]: 
time="2025-09-04T23:41:39.583299202Z" level=info msg="CreateContainer within sandbox \"f7cad906892b48ddf5f1e7cf8426d7216afa2e7e99bfa91cc9d989d5b8abc3e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:41:39.658965 containerd[1512]: time="2025-09-04T23:41:39.658882319Z" level=info msg="CreateContainer within sandbox \"6479c3e40839d36145323fdd8bcf2e8e53e22a123c64e8a5d4dd4179b547b6cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:41:39.880017 kubelet[2260]: E0904 23:41:39.879860 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="6.4s" Sep 4 23:41:40.035182 kubelet[2260]: E0904 23:41:40.035097 2260 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:41:40.267013 containerd[1512]: time="2025-09-04T23:41:40.266858756Z" level=info msg="CreateContainer within sandbox \"1a54ed5b425c7d8a5fe1110304d451fec676af8d79ab00c07ccacb68e006db82\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b\"" Sep 4 23:41:40.268083 containerd[1512]: time="2025-09-04T23:41:40.268043008Z" level=info msg="StartContainer for \"9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b\"" Sep 4 23:41:40.313708 systemd[1]: Started cri-containerd-9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b.scope - libcontainer container 9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b. 
Sep 4 23:41:40.411250 kubelet[2260]: E0904 23:41:40.411139 2260 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:41:40.579506 containerd[1512]: time="2025-09-04T23:41:40.579318613Z" level=info msg="CreateContainer within sandbox \"f7cad906892b48ddf5f1e7cf8426d7216afa2e7e99bfa91cc9d989d5b8abc3e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de7aa5f1218db0e91bd6652ded218fbce1dee132458a09a40dc35b80251cee4f\"" Sep 4 23:41:40.579506 containerd[1512]: time="2025-09-04T23:41:40.579450814Z" level=info msg="StartContainer for \"9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b\" returns successfully" Sep 4 23:41:40.580942 containerd[1512]: time="2025-09-04T23:41:40.580877929Z" level=info msg="StartContainer for \"de7aa5f1218db0e91bd6652ded218fbce1dee132458a09a40dc35b80251cee4f\"" Sep 4 23:41:40.619380 containerd[1512]: time="2025-09-04T23:41:40.619177806Z" level=info msg="CreateContainer within sandbox \"6479c3e40839d36145323fdd8bcf2e8e53e22a123c64e8a5d4dd4179b547b6cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f70045aa07db88e756ed89984ce34ea71553e690aa6f8d154ba1e35abee6f8b9\"" Sep 4 23:41:40.619992 containerd[1512]: time="2025-09-04T23:41:40.619962590Z" level=info msg="StartContainer for \"f70045aa07db88e756ed89984ce34ea71553e690aa6f8d154ba1e35abee6f8b9\"" Sep 4 23:41:40.725025 kubelet[2260]: E0904 23:41:40.724966 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:40.725278 kubelet[2260]: E0904 23:41:40.725237 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:40.768399 systemd[1]: Started cri-containerd-de7aa5f1218db0e91bd6652ded218fbce1dee132458a09a40dc35b80251cee4f.scope - libcontainer container de7aa5f1218db0e91bd6652ded218fbce1dee132458a09a40dc35b80251cee4f. Sep 4 23:41:40.799089 systemd[1]: Started cri-containerd-f70045aa07db88e756ed89984ce34ea71553e690aa6f8d154ba1e35abee6f8b9.scope - libcontainer container f70045aa07db88e756ed89984ce34ea71553e690aa6f8d154ba1e35abee6f8b9. Sep 4 23:41:40.865961 kubelet[2260]: I0904 23:41:40.865251 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:41:40.865961 kubelet[2260]: E0904 23:41:40.865885 2260 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Sep 4 23:41:40.940953 systemd[1]: run-containerd-runc-k8s.io-9520b615b24ac3731f379fc155800d57f4c72597657926a82121f02c90058e6b-runc.dhfyiD.mount: Deactivated successfully. 
Sep 4 23:41:41.209216 containerd[1512]: time="2025-09-04T23:41:41.209021416Z" level=info msg="StartContainer for \"f70045aa07db88e756ed89984ce34ea71553e690aa6f8d154ba1e35abee6f8b9\" returns successfully" Sep 4 23:41:41.209216 containerd[1512]: time="2025-09-04T23:41:41.209089917Z" level=info msg="StartContainer for \"de7aa5f1218db0e91bd6652ded218fbce1dee132458a09a40dc35b80251cee4f\" returns successfully" Sep 4 23:41:41.733643 kubelet[2260]: E0904 23:41:41.733589 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:41.735761 kubelet[2260]: E0904 23:41:41.733743 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:41.739998 kubelet[2260]: E0904 23:41:41.739626 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:41.739998 kubelet[2260]: E0904 23:41:41.739820 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:41.741215 kubelet[2260]: E0904 23:41:41.741021 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:41.741215 kubelet[2260]: E0904 23:41:41.741144 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:42.740944 kubelet[2260]: E0904 23:41:42.740904 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:42.741557 
kubelet[2260]: E0904 23:41:42.740979 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:42.741557 kubelet[2260]: E0904 23:41:42.741074 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:42.741557 kubelet[2260]: E0904 23:41:42.741077 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:42.741705 kubelet[2260]: E0904 23:41:42.741676 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:42.741880 kubelet[2260]: E0904 23:41:42.741858 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:43.621104 kubelet[2260]: I0904 23:41:43.621002 2260 apiserver.go:52] "Watching apiserver" Sep 4 23:41:43.673126 kubelet[2260]: I0904 23:41:43.673014 2260 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:41:43.743269 kubelet[2260]: E0904 23:41:43.743206 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:43.743804 kubelet[2260]: E0904 23:41:43.743325 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:43.743804 kubelet[2260]: E0904 23:41:43.743385 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:43.743804 kubelet[2260]: E0904 23:41:43.743505 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:44.433037 update_engine[1504]: I20250904 23:41:44.432871 1504 update_attempter.cc:509] Updating boot flags... Sep 4 23:41:44.554150 kubelet[2260]: E0904 23:41:44.554069 2260 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:41:44.707944 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2557) Sep 4 23:41:44.745045 kubelet[2260]: E0904 23:41:44.744996 2260 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:41:44.745591 kubelet[2260]: E0904 23:41:44.745196 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:45.494961 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2556) Sep 4 23:41:45.538349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2556) Sep 4 23:41:46.184234 kubelet[2260]: E0904 23:41:46.184172 2260 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 23:41:46.549119 kubelet[2260]: E0904 23:41:46.549049 2260 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 23:41:47.269868 kubelet[2260]: I0904 23:41:47.269485 2260 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 
23:41:48.099770 kubelet[2260]: I0904 23:41:48.099689 2260 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:41:48.173804 kubelet[2260]: I0904 23:41:48.173733 2260 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:41:48.547057 kubelet[2260]: I0904 23:41:48.546475 2260 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:41:48.547057 kubelet[2260]: E0904 23:41:48.547047 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:48.646219 kubelet[2260]: E0904 23:41:48.646146 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:48.647599 kubelet[2260]: I0904 23:41:48.647573 2260 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:41:48.868781 kubelet[2260]: E0904 23:41:48.868623 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:49.664058 kubelet[2260]: E0904 23:41:49.663945 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:50.828076 kubelet[2260]: E0904 23:41:50.827981 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:51.102673 kubelet[2260]: I0904 23:41:51.102368 2260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=3.10234003 podStartE2EDuration="3.10234003s" podCreationTimestamp="2025-09-04 23:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:41:51.102107371 +0000 UTC m=+18.281665112" watchObservedRunningTime="2025-09-04 23:41:51.10234003 +0000 UTC m=+18.281897751" Sep 4 23:41:51.352520 kubelet[2260]: I0904 23:41:51.352402 2260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.352376242 podStartE2EDuration="3.352376242s" podCreationTimestamp="2025-09-04 23:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:41:51.352345954 +0000 UTC m=+18.531903685" watchObservedRunningTime="2025-09-04 23:41:51.352376242 +0000 UTC m=+18.531933963" Sep 4 23:41:51.686639 kubelet[2260]: I0904 23:41:51.686532 2260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.686479166 podStartE2EDuration="3.686479166s" podCreationTimestamp="2025-09-04 23:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:41:51.489700731 +0000 UTC m=+18.669258452" watchObservedRunningTime="2025-09-04 23:41:51.686479166 +0000 UTC m=+18.866036887" Sep 4 23:41:53.326214 kubelet[2260]: E0904 23:41:53.326153 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:41:56.339048 systemd[1]: Reload requested from client PID 2570 ('systemctl') (unit session-7.scope)... Sep 4 23:41:56.339071 systemd[1]: Reloading... Sep 4 23:41:56.443137 zram_generator::config[2610]: No configuration found. 
Sep 4 23:41:56.589859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:41:56.738354 systemd[1]: Reloading finished in 398 ms. Sep 4 23:41:56.773945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:56.793837 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:41:56.794275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:56.794344 systemd[1]: kubelet.service: Consumed 2.084s CPU time, 137M memory peak. Sep 4 23:41:56.809590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:41:57.029668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:41:57.035289 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:41:57.095009 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:41:57.095009 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:41:57.095009 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:41:57.095009 kubelet[2659]: I0904 23:41:57.093958 2659 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:41:57.103403 kubelet[2659]: I0904 23:41:57.103259 2659 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 23:41:57.103403 kubelet[2659]: I0904 23:41:57.103297 2659 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:41:57.103662 kubelet[2659]: I0904 23:41:57.103623 2659 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 23:41:57.105553 kubelet[2659]: I0904 23:41:57.105511 2659 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 4 23:41:57.108054 kubelet[2659]: I0904 23:41:57.107981 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:41:57.113844 kubelet[2659]: E0904 23:41:57.113794 2659 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:41:57.113844 kubelet[2659]: I0904 23:41:57.113830 2659 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:41:57.119749 kubelet[2659]: I0904 23:41:57.119693 2659 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:41:57.120106 kubelet[2659]: I0904 23:41:57.120057 2659 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:41:57.120280 kubelet[2659]: I0904 23:41:57.120098 2659 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:41:57.120376 kubelet[2659]: I0904 23:41:57.120293 2659 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:41:57.120376 kubelet[2659]: I0904 23:41:57.120302 2659 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 23:41:57.120376 kubelet[2659]: I0904 23:41:57.120358 2659 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:41:57.120560 kubelet[2659]: I0904 23:41:57.120545 2659 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 23:41:57.120590 kubelet[2659]: I0904 23:41:57.120562 2659 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:41:57.120590 kubelet[2659]: I0904 23:41:57.120589 2659 kubelet.go:386] "Adding apiserver pod source"
Sep 4 23:41:57.120672 kubelet[2659]: I0904 23:41:57.120605 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:41:57.121970 kubelet[2659]: I0904 23:41:57.121926 2659 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:41:57.122686 kubelet[2659]: I0904 23:41:57.122648 2659 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 23:41:57.130589 kubelet[2659]: I0904 23:41:57.128586 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:41:57.130589 kubelet[2659]: I0904 23:41:57.128673 2659 server.go:1289] "Started kubelet"
Sep 4 23:41:57.130589 kubelet[2659]: I0904 23:41:57.130044 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:41:57.132790 kubelet[2659]: I0904 23:41:57.132718 2659 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:41:57.134480 kubelet[2659]: I0904 23:41:57.134041 2659 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 23:41:57.136321 kubelet[2659]: I0904 23:41:57.136100 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:41:57.137207 kubelet[2659]: I0904 23:41:57.137114 2659 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:41:57.137393 kubelet[2659]: I0904 23:41:57.137266 2659 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:41:57.137769 kubelet[2659]: I0904 23:41:57.137278 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:41:57.138190 kubelet[2659]: I0904 23:41:57.138173 2659 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:41:57.139349 kubelet[2659]: I0904 23:41:57.139311 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:41:57.147924 kubelet[2659]: E0904 23:41:57.145539 2659 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:41:57.147924 kubelet[2659]: I0904 23:41:57.146982 2659 factory.go:223] Registration of the systemd container factory successfully
Sep 4 23:41:57.147924 kubelet[2659]: I0904 23:41:57.147120 2659 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:41:57.148216 kubelet[2659]: I0904 23:41:57.148156 2659 factory.go:223] Registration of the containerd container factory successfully
Sep 4 23:41:57.161518 kubelet[2659]: I0904 23:41:57.161472 2659 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:41:57.164027 kubelet[2659]: I0904 23:41:57.163980 2659 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:41:57.164027 kubelet[2659]: I0904 23:41:57.164028 2659 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 23:41:57.164139 kubelet[2659]: I0904 23:41:57.164064 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:41:57.164139 kubelet[2659]: I0904 23:41:57.164076 2659 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 4 23:41:57.164212 kubelet[2659]: E0904 23:41:57.164139 2659 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:41:57.191864 kubelet[2659]: I0904 23:41:57.191810 2659 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:41:57.191864 kubelet[2659]: I0904 23:41:57.191835 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:41:57.191864 kubelet[2659]: I0904 23:41:57.191862 2659 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:41:57.192141 kubelet[2659]: I0904 23:41:57.192066 2659 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 23:41:57.192141 kubelet[2659]: I0904 23:41:57.192080 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 23:41:57.192141 kubelet[2659]: I0904 23:41:57.192100 2659 policy_none.go:49] "None policy: Start"
Sep 4 23:41:57.192141 kubelet[2659]: I0904 23:41:57.192112 2659 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:41:57.192141 kubelet[2659]: I0904 23:41:57.192126 2659 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:41:57.192307 kubelet[2659]: I0904 23:41:57.192251 2659 state_mem.go:75] "Updated machine memory state"
Sep 4 23:41:57.197753 kubelet[2659]: E0904 23:41:57.197622 2659 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 4 23:41:57.198007 kubelet[2659]: I0904 23:41:57.197988 2659 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:41:57.198060 kubelet[2659]: I0904 23:41:57.198008 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:41:57.198828 kubelet[2659]: I0904 23:41:57.198793 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:41:57.200013 kubelet[2659]: E0904 23:41:57.199942 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:41:57.265793 kubelet[2659]: I0904 23:41:57.265726 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:57.266270 kubelet[2659]: I0904 23:41:57.265729 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:41:57.266270 kubelet[2659]: I0904 23:41:57.266004 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.306999 kubelet[2659]: I0904 23:41:57.306856 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 23:41:57.339845 kubelet[2659]: I0904 23:41:57.339754 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:57.339845 kubelet[2659]: I0904 23:41:57.339831 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:57.339845 kubelet[2659]: I0904 23:41:57.339859 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.340158 kubelet[2659]: I0904 23:41:57.339889 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.340158 kubelet[2659]: I0904 23:41:57.339951 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 23:41:57.340158 kubelet[2659]: I0904 23:41:57.339971 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e3270e5c6cae6be062c2e4d3059349-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e3270e5c6cae6be062c2e4d3059349\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:57.340158 kubelet[2659]: I0904 23:41:57.339994 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.340158 kubelet[2659]: I0904 23:41:57.340041 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.340333 kubelet[2659]: I0904 23:41:57.340090 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.488132 kubelet[2659]: E0904 23:41:57.488067 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:57.488405 kubelet[2659]: E0904 23:41:57.488302 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:57.488721 kubelet[2659]: E0904 23:41:57.488701 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:41:57.489123 kubelet[2659]: E0904 23:41:57.488867 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:41:57.489123 kubelet[2659]: E0904 23:41:57.489074 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:57.489321 kubelet[2659]: E0904 23:41:57.489304 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:57.528511 kubelet[2659]: I0904 23:41:57.528447 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 4 23:41:57.528729 kubelet[2659]: I0904 23:41:57.528589 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 23:41:58.121952 kubelet[2659]: I0904 23:41:58.121886 2659 apiserver.go:52] "Watching apiserver"
Sep 4 23:41:58.138645 kubelet[2659]: I0904 23:41:58.138560 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:41:58.177875 kubelet[2659]: I0904 23:41:58.177838 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:41:58.177875 kubelet[2659]: E0904 23:41:58.177945 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:58.177875 kubelet[2659]: I0904 23:41:58.178059 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:58.304119 sudo[2698]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 4 23:41:58.304501 sudo[2698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 4 23:41:58.423344 kubelet[2659]: E0904 23:41:58.423191 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:41:58.423487 kubelet[2659]: E0904 23:41:58.423437 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:58.423761 kubelet[2659]: E0904 23:41:58.423547 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:41:58.423761 kubelet[2659]: E0904 23:41:58.423679 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:58.810084 sudo[2698]: pam_unix(sudo:session): session closed for user root
Sep 4 23:41:59.179505 kubelet[2659]: E0904 23:41:59.179332 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:41:59.179505 kubelet[2659]: E0904 23:41:59.179409 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:00.234725 kubelet[2659]: E0904 23:42:00.234405 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:01.182589 kubelet[2659]: E0904 23:42:01.182525 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:02.480826 kubelet[2659]: E0904 23:42:02.480767 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:03.186187 kubelet[2659]: E0904 23:42:03.186132 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:03.511656 kubelet[2659]: I0904 23:42:03.511351 2659 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 23:42:03.512208 kubelet[2659]: I0904 23:42:03.511934 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 23:42:03.512253 containerd[1512]: time="2025-09-04T23:42:03.511715947Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 23:42:05.226765 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 4 23:42:05.228443 sshd[1697]: Connection closed by 10.0.0.1 port 38082
Sep 4 23:42:05.229507 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Sep 4 23:42:05.233973 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:38082.service: Deactivated successfully.
Sep 4 23:42:05.236920 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 23:42:05.237231 systemd[1]: session-7.scope: Consumed 6.411s CPU time, 249.8M memory peak.
Sep 4 23:42:05.238763 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
Sep 4 23:42:05.239964 systemd-logind[1494]: Removed session 7.
Sep 4 23:42:05.711393 systemd[1]: Created slice kubepods-burstable-pod75d2f65c_be6b_49bc_b83f_56af452cdd2b.slice - libcontainer container kubepods-burstable-pod75d2f65c_be6b_49bc_b83f_56af452cdd2b.slice.
Sep 4 23:42:05.795293 kubelet[2659]: I0904 23:42:05.795229 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hostproc\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.795293 kubelet[2659]: I0904 23:42:05.795277 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cni-path\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.795293 kubelet[2659]: I0904 23:42:05.795297 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-config-path\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795318 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-bpf-maps\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795337 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-xtables-lock\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795354 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-net\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795372 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-kernel\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795409 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-cgroup\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796047 kubelet[2659]: I0904 23:42:05.795443 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hubble-tls\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796248 kubelet[2659]: I0904 23:42:05.795491 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-run\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796248 kubelet[2659]: I0904 23:42:05.795533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-etc-cni-netd\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796248 kubelet[2659]: I0904 23:42:05.795580 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-lib-modules\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796248 kubelet[2659]: I0904 23:42:05.795616 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75d2f65c-be6b-49bc-b83f-56af452cdd2b-clustermesh-secrets\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.796248 kubelet[2659]: I0904 23:42:05.795646 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4m9n\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-kube-api-access-t4m9n\") pod \"cilium-ntbr4\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") " pod="kube-system/cilium-ntbr4"
Sep 4 23:42:05.845847 systemd[1]: Created slice kubepods-besteffort-podb54cfc4a_953e_4deb_bda3_6be04a9117c2.slice - libcontainer container kubepods-besteffort-podb54cfc4a_953e_4deb_bda3_6be04a9117c2.slice.
Sep 4 23:42:05.896951 kubelet[2659]: I0904 23:42:05.896815 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmg5\" (UniqueName: \"kubernetes.io/projected/b54cfc4a-953e-4deb-bda3-6be04a9117c2-kube-api-access-cxmg5\") pod \"kube-proxy-l649h\" (UID: \"b54cfc4a-953e-4deb-bda3-6be04a9117c2\") " pod="kube-system/kube-proxy-l649h"
Sep 4 23:42:05.897221 kubelet[2659]: I0904 23:42:05.897032 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b54cfc4a-953e-4deb-bda3-6be04a9117c2-kube-proxy\") pod \"kube-proxy-l649h\" (UID: \"b54cfc4a-953e-4deb-bda3-6be04a9117c2\") " pod="kube-system/kube-proxy-l649h"
Sep 4 23:42:05.897221 kubelet[2659]: I0904 23:42:05.897053 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54cfc4a-953e-4deb-bda3-6be04a9117c2-xtables-lock\") pod \"kube-proxy-l649h\" (UID: \"b54cfc4a-953e-4deb-bda3-6be04a9117c2\") " pod="kube-system/kube-proxy-l649h"
Sep 4 23:42:05.897221 kubelet[2659]: I0904 23:42:05.897076 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54cfc4a-953e-4deb-bda3-6be04a9117c2-lib-modules\") pod \"kube-proxy-l649h\" (UID: \"b54cfc4a-953e-4deb-bda3-6be04a9117c2\") " pod="kube-system/kube-proxy-l649h"
Sep 4 23:42:06.055318 systemd[1]: Created slice kubepods-besteffort-pod53e99c6e_dda3_4308_a6d4_e6e9e5a2ed1f.slice - libcontainer container kubepods-besteffort-pod53e99c6e_dda3_4308_a6d4_e6e9e5a2ed1f.slice.
Sep 4 23:42:06.098923 kubelet[2659]: I0904 23:42:06.098827 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jf2\" (UniqueName: \"kubernetes.io/projected/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-kube-api-access-b5jf2\") pod \"cilium-operator-6c4d7847fc-p6fss\" (UID: \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\") " pod="kube-system/cilium-operator-6c4d7847fc-p6fss"
Sep 4 23:42:06.098923 kubelet[2659]: I0904 23:42:06.098886 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p6fss\" (UID: \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\") " pod="kube-system/cilium-operator-6c4d7847fc-p6fss"
Sep 4 23:42:06.358372 kubelet[2659]: E0904 23:42:06.358213 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:06.359039 containerd[1512]: time="2025-09-04T23:42:06.358961808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p6fss,Uid:53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f,Namespace:kube-system,Attempt:0,}"
Sep 4 23:42:06.457995 kubelet[2659]: E0904 23:42:06.457873 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:06.458650 containerd[1512]: time="2025-09-04T23:42:06.458596295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l649h,Uid:b54cfc4a-953e-4deb-bda3-6be04a9117c2,Namespace:kube-system,Attempt:0,}"
Sep 4 23:42:06.615243 kubelet[2659]: E0904 23:42:06.615111 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:06.615666 containerd[1512]: time="2025-09-04T23:42:06.615596924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntbr4,Uid:75d2f65c-be6b-49bc-b83f-56af452cdd2b,Namespace:kube-system,Attempt:0,}"
Sep 4 23:42:07.602798 containerd[1512]: time="2025-09-04T23:42:07.601916322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:42:07.602798 containerd[1512]: time="2025-09-04T23:42:07.602740511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:42:07.602798 containerd[1512]: time="2025-09-04T23:42:07.602755208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.603379 containerd[1512]: time="2025-09-04T23:42:07.602852191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.635108 systemd[1]: Started cri-containerd-5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6.scope - libcontainer container 5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6.
Sep 4 23:42:07.678114 containerd[1512]: time="2025-09-04T23:42:07.678061094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p6fss,Uid:53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\""
Sep 4 23:42:07.678998 kubelet[2659]: E0904 23:42:07.678956 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:07.680755 containerd[1512]: time="2025-09-04T23:42:07.680687340Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:42:07.885257 containerd[1512]: time="2025-09-04T23:42:07.884999182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:42:07.885257 containerd[1512]: time="2025-09-04T23:42:07.885053194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:42:07.885257 containerd[1512]: time="2025-09-04T23:42:07.885064225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.885257 containerd[1512]: time="2025-09-04T23:42:07.885149124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.910068 systemd[1]: Started cri-containerd-51c30470596c22eacbcab7f57900bcfa467cf8e4b43cabd5bb72893a25109cec.scope - libcontainer container 51c30470596c22eacbcab7f57900bcfa467cf8e4b43cabd5bb72893a25109cec.
Sep 4 23:42:07.942482 containerd[1512]: time="2025-09-04T23:42:07.942360456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:42:07.942659 containerd[1512]: time="2025-09-04T23:42:07.942509316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:42:07.942659 containerd[1512]: time="2025-09-04T23:42:07.942559631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.942878 containerd[1512]: time="2025-09-04T23:42:07.942803118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:42:07.944481 containerd[1512]: time="2025-09-04T23:42:07.944402856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l649h,Uid:b54cfc4a-953e-4deb-bda3-6be04a9117c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c30470596c22eacbcab7f57900bcfa467cf8e4b43cabd5bb72893a25109cec\""
Sep 4 23:42:07.945294 kubelet[2659]: E0904 23:42:07.945264 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:42:07.977126 systemd[1]: Started cri-containerd-c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47.scope - libcontainer container c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47.
Sep 4 23:42:08.007052 containerd[1512]: time="2025-09-04T23:42:08.007003126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntbr4,Uid:75d2f65c-be6b-49bc-b83f-56af452cdd2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\"" Sep 4 23:42:08.007813 kubelet[2659]: E0904 23:42:08.007783 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:08.031510 containerd[1512]: time="2025-09-04T23:42:08.031301012Z" level=info msg="CreateContainer within sandbox \"51c30470596c22eacbcab7f57900bcfa467cf8e4b43cabd5bb72893a25109cec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:42:08.032488 kubelet[2659]: E0904 23:42:08.031385 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:08.197030 kubelet[2659]: E0904 23:42:08.196844 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:08.298986 containerd[1512]: time="2025-09-04T23:42:08.298913091Z" level=info msg="CreateContainer within sandbox \"51c30470596c22eacbcab7f57900bcfa467cf8e4b43cabd5bb72893a25109cec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ec367662907a121386d73066ddd847a8ffd9c33a0340971d0c3e496dda7e331\"" Sep 4 23:42:08.301092 containerd[1512]: time="2025-09-04T23:42:08.300139066Z" level=info msg="StartContainer for \"8ec367662907a121386d73066ddd847a8ffd9c33a0340971d0c3e496dda7e331\"" Sep 4 23:42:08.339334 systemd[1]: Started cri-containerd-8ec367662907a121386d73066ddd847a8ffd9c33a0340971d0c3e496dda7e331.scope - libcontainer container 
8ec367662907a121386d73066ddd847a8ffd9c33a0340971d0c3e496dda7e331. Sep 4 23:42:08.385434 containerd[1512]: time="2025-09-04T23:42:08.385355174Z" level=info msg="StartContainer for \"8ec367662907a121386d73066ddd847a8ffd9c33a0340971d0c3e496dda7e331\" returns successfully" Sep 4 23:42:09.216453 kubelet[2659]: E0904 23:42:09.215609 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:09.259214 kubelet[2659]: I0904 23:42:09.258996 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l649h" podStartSLOduration=5.258972762 podStartE2EDuration="5.258972762s" podCreationTimestamp="2025-09-04 23:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:42:09.258698547 +0000 UTC m=+12.216699355" watchObservedRunningTime="2025-09-04 23:42:09.258972762 +0000 UTC m=+12.216973560" Sep 4 23:42:09.544839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933810338.mount: Deactivated successfully. 
Sep 4 23:42:10.221841 kubelet[2659]: E0904 23:42:10.221772 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:14.727143 containerd[1512]: time="2025-09-04T23:42:14.727020800Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:42:14.757516 containerd[1512]: time="2025-09-04T23:42:14.757384754Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 23:42:14.791264 containerd[1512]: time="2025-09-04T23:42:14.791186605Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:42:14.793565 containerd[1512]: time="2025-09-04T23:42:14.793492345Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.112710076s" Sep 4 23:42:14.793654 containerd[1512]: time="2025-09-04T23:42:14.793575060Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 23:42:14.794863 containerd[1512]: time="2025-09-04T23:42:14.794828755Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:42:14.887851 containerd[1512]: time="2025-09-04T23:42:14.887768006Z" level=info msg="CreateContainer within sandbox \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:42:15.480483 containerd[1512]: time="2025-09-04T23:42:15.480416726Z" level=info msg="CreateContainer within sandbox \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\"" Sep 4 23:42:15.481019 containerd[1512]: time="2025-09-04T23:42:15.480834551Z" level=info msg="StartContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\"" Sep 4 23:42:15.514044 systemd[1]: Started cri-containerd-05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18.scope - libcontainer container 05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18. Sep 4 23:42:15.766688 containerd[1512]: time="2025-09-04T23:42:15.766599878Z" level=info msg="StartContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" returns successfully" Sep 4 23:42:16.235720 kubelet[2659]: E0904 23:42:16.235553 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:17.237425 kubelet[2659]: E0904 23:42:17.237386 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:24.113116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428887874.mount: Deactivated successfully. 
Sep 4 23:42:29.493085 containerd[1512]: time="2025-09-04T23:42:29.492913894Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:42:29.494112 containerd[1512]: time="2025-09-04T23:42:29.494041941Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 23:42:29.495490 containerd[1512]: time="2025-09-04T23:42:29.495456213Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:42:29.497578 containerd[1512]: time="2025-09-04T23:42:29.497517892Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.702646077s" Sep 4 23:42:29.497661 containerd[1512]: time="2025-09-04T23:42:29.497583064Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 23:42:29.503861 containerd[1512]: time="2025-09-04T23:42:29.503818342Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:42:29.585582 containerd[1512]: time="2025-09-04T23:42:29.585511259Z" level=info msg="CreateContainer within sandbox 
\"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\"" Sep 4 23:42:29.586283 containerd[1512]: time="2025-09-04T23:42:29.586245798Z" level=info msg="StartContainer for \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\"" Sep 4 23:42:29.710386 systemd[1]: Started cri-containerd-eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5.scope - libcontainer container eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5. Sep 4 23:42:29.752172 containerd[1512]: time="2025-09-04T23:42:29.751999075Z" level=info msg="StartContainer for \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\" returns successfully" Sep 4 23:42:29.763861 systemd[1]: cri-containerd-eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5.scope: Deactivated successfully. Sep 4 23:42:30.101075 containerd[1512]: time="2025-09-04T23:42:30.089980862Z" level=info msg="shim disconnected" id=eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5 namespace=k8s.io Sep 4 23:42:30.101075 containerd[1512]: time="2025-09-04T23:42:30.101059797Z" level=warning msg="cleaning up after shim disconnected" id=eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5 namespace=k8s.io Sep 4 23:42:30.101075 containerd[1512]: time="2025-09-04T23:42:30.101080075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:42:30.318708 kubelet[2659]: E0904 23:42:30.318649 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:30.333231 containerd[1512]: time="2025-09-04T23:42:30.333083069Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:42:30.378328 kubelet[2659]: I0904 23:42:30.378128 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p6fss" podStartSLOduration=19.263719881 podStartE2EDuration="26.37807147s" podCreationTimestamp="2025-09-04 23:42:04 +0000 UTC" firstStartedPulling="2025-09-04 23:42:07.68023495 +0000 UTC m=+10.638235748" lastFinishedPulling="2025-09-04 23:42:14.794586529 +0000 UTC m=+17.752587337" observedRunningTime="2025-09-04 23:42:16.267292954 +0000 UTC m=+19.225293762" watchObservedRunningTime="2025-09-04 23:42:30.37807147 +0000 UTC m=+33.336072268" Sep 4 23:42:30.585340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5-rootfs.mount: Deactivated successfully. Sep 4 23:42:30.751501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967959837.mount: Deactivated successfully. Sep 4 23:42:30.754191 containerd[1512]: time="2025-09-04T23:42:30.754142166Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\"" Sep 4 23:42:30.755053 containerd[1512]: time="2025-09-04T23:42:30.754968055Z" level=info msg="StartContainer for \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\"" Sep 4 23:42:30.798286 systemd[1]: Started cri-containerd-ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7.scope - libcontainer container ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7. 
Sep 4 23:42:30.840822 containerd[1512]: time="2025-09-04T23:42:30.840752033Z" level=info msg="StartContainer for \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\" returns successfully" Sep 4 23:42:30.856373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:42:30.856992 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:42:30.857242 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:42:30.864313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:42:30.864556 systemd[1]: cri-containerd-ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7.scope: Deactivated successfully. Sep 4 23:42:30.891042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:42:30.906769 containerd[1512]: time="2025-09-04T23:42:30.906671722Z" level=info msg="shim disconnected" id=ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7 namespace=k8s.io Sep 4 23:42:30.906769 containerd[1512]: time="2025-09-04T23:42:30.906741904Z" level=warning msg="cleaning up after shim disconnected" id=ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7 namespace=k8s.io Sep 4 23:42:30.906769 containerd[1512]: time="2025-09-04T23:42:30.906753235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:42:31.321348 kubelet[2659]: E0904 23:42:31.321307 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:31.332215 containerd[1512]: time="2025-09-04T23:42:31.332107999Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:42:31.364305 containerd[1512]: time="2025-09-04T23:42:31.364223469Z" level=info msg="CreateContainer within sandbox 
\"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\"" Sep 4 23:42:31.364971 containerd[1512]: time="2025-09-04T23:42:31.364860383Z" level=info msg="StartContainer for \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\"" Sep 4 23:42:31.404137 systemd[1]: Started cri-containerd-0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03.scope - libcontainer container 0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03. Sep 4 23:42:31.444851 containerd[1512]: time="2025-09-04T23:42:31.444794911Z" level=info msg="StartContainer for \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\" returns successfully" Sep 4 23:42:31.446956 systemd[1]: cri-containerd-0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03.scope: Deactivated successfully. Sep 4 23:42:31.476855 containerd[1512]: time="2025-09-04T23:42:31.476782480Z" level=info msg="shim disconnected" id=0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03 namespace=k8s.io Sep 4 23:42:31.476855 containerd[1512]: time="2025-09-04T23:42:31.476843704Z" level=warning msg="cleaning up after shim disconnected" id=0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03 namespace=k8s.io Sep 4 23:42:31.476855 containerd[1512]: time="2025-09-04T23:42:31.476852751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:42:31.582009 systemd[1]: run-containerd-runc-k8s.io-ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7-runc.R7HQtO.mount: Deactivated successfully. Sep 4 23:42:31.582180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7-rootfs.mount: Deactivated successfully. 
Sep 4 23:42:32.325510 kubelet[2659]: E0904 23:42:32.325416 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:32.435657 containerd[1512]: time="2025-09-04T23:42:32.435602665Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:42:32.770614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132408772.mount: Deactivated successfully. Sep 4 23:42:32.983171 containerd[1512]: time="2025-09-04T23:42:32.983108524Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\"" Sep 4 23:42:32.984272 containerd[1512]: time="2025-09-04T23:42:32.983733807Z" level=info msg="StartContainer for \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\"" Sep 4 23:42:33.064129 systemd[1]: Started cri-containerd-eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45.scope - libcontainer container eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45. Sep 4 23:42:33.097553 systemd[1]: cri-containerd-eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45.scope: Deactivated successfully. Sep 4 23:42:33.264422 containerd[1512]: time="2025-09-04T23:42:33.264358458Z" level=info msg="StartContainer for \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\" returns successfully" Sep 4 23:42:33.288157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45-rootfs.mount: Deactivated successfully. 
Sep 4 23:42:33.298002 containerd[1512]: time="2025-09-04T23:42:33.297888676Z" level=info msg="shim disconnected" id=eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45 namespace=k8s.io Sep 4 23:42:33.298002 containerd[1512]: time="2025-09-04T23:42:33.298000516Z" level=warning msg="cleaning up after shim disconnected" id=eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45 namespace=k8s.io Sep 4 23:42:33.298213 containerd[1512]: time="2025-09-04T23:42:33.298015143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:42:33.329560 kubelet[2659]: E0904 23:42:33.329393 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:33.337715 containerd[1512]: time="2025-09-04T23:42:33.337662637Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:42:33.359311 containerd[1512]: time="2025-09-04T23:42:33.359230317Z" level=info msg="CreateContainer within sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\"" Sep 4 23:42:33.360351 containerd[1512]: time="2025-09-04T23:42:33.360309199Z" level=info msg="StartContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\"" Sep 4 23:42:33.393071 systemd[1]: Started cri-containerd-cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812.scope - libcontainer container cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812. 
Sep 4 23:42:33.430153 containerd[1512]: time="2025-09-04T23:42:33.430084891Z" level=info msg="StartContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" returns successfully" Sep 4 23:42:33.590825 kubelet[2659]: I0904 23:42:33.590657 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:42:33.682519 systemd[1]: Created slice kubepods-burstable-pod553cf5a3_8301_4038_9a6d_f90cf6ada490.slice - libcontainer container kubepods-burstable-pod553cf5a3_8301_4038_9a6d_f90cf6ada490.slice. Sep 4 23:42:33.691731 systemd[1]: Created slice kubepods-burstable-pod32766d13_67fb_43f5_b192_a1dd7b2e6020.slice - libcontainer container kubepods-burstable-pod32766d13_67fb_43f5_b192_a1dd7b2e6020.slice. Sep 4 23:42:33.741092 kubelet[2659]: I0904 23:42:33.740820 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9rv\" (UniqueName: \"kubernetes.io/projected/32766d13-67fb-43f5-b192-a1dd7b2e6020-kube-api-access-vt9rv\") pod \"coredns-674b8bbfcf-qb82q\" (UID: \"32766d13-67fb-43f5-b192-a1dd7b2e6020\") " pod="kube-system/coredns-674b8bbfcf-qb82q" Sep 4 23:42:33.741092 kubelet[2659]: I0904 23:42:33.740943 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/553cf5a3-8301-4038-9a6d-f90cf6ada490-config-volume\") pod \"coredns-674b8bbfcf-hjjm4\" (UID: \"553cf5a3-8301-4038-9a6d-f90cf6ada490\") " pod="kube-system/coredns-674b8bbfcf-hjjm4" Sep 4 23:42:33.741092 kubelet[2659]: I0904 23:42:33.740978 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32766d13-67fb-43f5-b192-a1dd7b2e6020-config-volume\") pod \"coredns-674b8bbfcf-qb82q\" (UID: \"32766d13-67fb-43f5-b192-a1dd7b2e6020\") " pod="kube-system/coredns-674b8bbfcf-qb82q" Sep 4 23:42:33.741092 kubelet[2659]: 
I0904 23:42:33.741002 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvfk\" (UniqueName: \"kubernetes.io/projected/553cf5a3-8301-4038-9a6d-f90cf6ada490-kube-api-access-fsvfk\") pod \"coredns-674b8bbfcf-hjjm4\" (UID: \"553cf5a3-8301-4038-9a6d-f90cf6ada490\") " pod="kube-system/coredns-674b8bbfcf-hjjm4" Sep 4 23:42:33.987120 kubelet[2659]: E0904 23:42:33.986940 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:33.988260 containerd[1512]: time="2025-09-04T23:42:33.988188488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hjjm4,Uid:553cf5a3-8301-4038-9a6d-f90cf6ada490,Namespace:kube-system,Attempt:0,}" Sep 4 23:42:33.996696 kubelet[2659]: E0904 23:42:33.996640 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:33.997064 containerd[1512]: time="2025-09-04T23:42:33.997020947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qb82q,Uid:32766d13-67fb-43f5-b192-a1dd7b2e6020,Namespace:kube-system,Attempt:0,}" Sep 4 23:42:34.336860 kubelet[2659]: E0904 23:42:34.336796 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:34.355700 kubelet[2659]: I0904 23:42:34.355007 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ntbr4" podStartSLOduration=8.864955917 podStartE2EDuration="30.3549816s" podCreationTimestamp="2025-09-04 23:42:04 +0000 UTC" firstStartedPulling="2025-09-04 23:42:08.008492195 +0000 UTC m=+10.966493003" lastFinishedPulling="2025-09-04 23:42:29.498517888 +0000 UTC m=+32.456518686" 
observedRunningTime="2025-09-04 23:42:34.354593692 +0000 UTC m=+37.312594510" watchObservedRunningTime="2025-09-04 23:42:34.3549816 +0000 UTC m=+37.312982398" Sep 4 23:42:35.339084 kubelet[2659]: E0904 23:42:35.339022 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:35.948387 systemd-networkd[1443]: cilium_host: Link UP Sep 4 23:42:35.948605 systemd-networkd[1443]: cilium_net: Link UP Sep 4 23:42:35.948875 systemd-networkd[1443]: cilium_net: Gained carrier Sep 4 23:42:35.949107 systemd-networkd[1443]: cilium_host: Gained carrier Sep 4 23:42:35.949296 systemd-networkd[1443]: cilium_net: Gained IPv6LL Sep 4 23:42:35.949512 systemd-networkd[1443]: cilium_host: Gained IPv6LL Sep 4 23:42:36.099293 systemd-networkd[1443]: cilium_vxlan: Link UP Sep 4 23:42:36.099308 systemd-networkd[1443]: cilium_vxlan: Gained carrier Sep 4 23:42:36.341402 kubelet[2659]: E0904 23:42:36.341336 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:36.341949 kernel: NET: Registered PF_ALG protocol family Sep 4 23:42:37.105114 systemd-networkd[1443]: lxc_health: Link UP Sep 4 23:42:37.114746 systemd-networkd[1443]: lxc_health: Gained carrier Sep 4 23:42:37.251328 systemd-networkd[1443]: lxc46e8198a4acf: Link UP Sep 4 23:42:37.270511 systemd-networkd[1443]: lxce26e6ae8d6f5: Link UP Sep 4 23:42:37.280959 kernel: eth0: renamed from tmp2b487 Sep 4 23:42:37.292927 kernel: eth0: renamed from tmped53d Sep 4 23:42:37.299640 systemd-networkd[1443]: lxce26e6ae8d6f5: Gained carrier Sep 4 23:42:37.300841 systemd-networkd[1443]: lxc46e8198a4acf: Gained carrier Sep 4 23:42:37.510097 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Sep 4 23:42:38.534161 systemd-networkd[1443]: lxce26e6ae8d6f5: Gained IPv6LL Sep 4 23:42:38.617721 
kubelet[2659]: E0904 23:42:38.617668 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:38.790332 systemd-networkd[1443]: lxc46e8198a4acf: Gained IPv6LL Sep 4 23:42:39.110115 systemd-networkd[1443]: lxc_health: Gained IPv6LL Sep 4 23:42:39.346481 kubelet[2659]: E0904 23:42:39.346437 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:40.348584 kubelet[2659]: E0904 23:42:40.348522 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:41.217216 containerd[1512]: time="2025-09-04T23:42:41.217063522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:42:41.217216 containerd[1512]: time="2025-09-04T23:42:41.217163875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:42:41.217216 containerd[1512]: time="2025-09-04T23:42:41.217180096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:42:41.217807 containerd[1512]: time="2025-09-04T23:42:41.217441700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:42:41.228028 containerd[1512]: time="2025-09-04T23:42:41.226777699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:42:41.228028 containerd[1512]: time="2025-09-04T23:42:41.226915514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:42:41.228028 containerd[1512]: time="2025-09-04T23:42:41.226943439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:42:41.228028 containerd[1512]: time="2025-09-04T23:42:41.227079310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:42:41.247752 systemd[1]: run-containerd-runc-k8s.io-ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01-runc.gFoyBP.mount: Deactivated successfully. Sep 4 23:42:41.266164 systemd[1]: Started cri-containerd-2b48752afe0488907ca419f9fe46c57e78bb3fd8fc0f5f42253c3d2322cd282e.scope - libcontainer container 2b48752afe0488907ca419f9fe46c57e78bb3fd8fc0f5f42253c3d2322cd282e. Sep 4 23:42:41.269295 systemd[1]: Started cri-containerd-ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01.scope - libcontainer container ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01. 
Sep 4 23:42:41.291963 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:42:41.295909 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:42:41.328647 containerd[1512]: time="2025-09-04T23:42:41.328590465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qb82q,Uid:32766d13-67fb-43f5-b192-a1dd7b2e6020,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b48752afe0488907ca419f9fe46c57e78bb3fd8fc0f5f42253c3d2322cd282e\"" Sep 4 23:42:41.329693 kubelet[2659]: E0904 23:42:41.329651 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:41.340120 containerd[1512]: time="2025-09-04T23:42:41.340056414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hjjm4,Uid:553cf5a3-8301-4038-9a6d-f90cf6ada490,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01\"" Sep 4 23:42:41.342409 kubelet[2659]: E0904 23:42:41.342371 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:41.372569 containerd[1512]: time="2025-09-04T23:42:41.372504059Z" level=info msg="CreateContainer within sandbox \"2b48752afe0488907ca419f9fe46c57e78bb3fd8fc0f5f42253c3d2322cd282e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:42:41.434467 containerd[1512]: time="2025-09-04T23:42:41.434393302Z" level=info msg="CreateContainer within sandbox \"ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:42:41.778920 containerd[1512]: time="2025-09-04T23:42:41.776504197Z" 
level=info msg="CreateContainer within sandbox \"2b48752afe0488907ca419f9fe46c57e78bb3fd8fc0f5f42253c3d2322cd282e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e40666fb9190d9a35acc884620f3aaccb637e2cfa3bee9d3cc4c4c5de1c7c7e1\"" Sep 4 23:42:41.778920 containerd[1512]: time="2025-09-04T23:42:41.778379055Z" level=info msg="StartContainer for \"e40666fb9190d9a35acc884620f3aaccb637e2cfa3bee9d3cc4c4c5de1c7c7e1\"" Sep 4 23:42:41.806611 containerd[1512]: time="2025-09-04T23:42:41.805272599Z" level=info msg="CreateContainer within sandbox \"ed53d3b4493e9724fe04dc66e98dbb8678956a1502424da925a635a5a9d49e01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca4dfe0f3636320a0827837f249f8854fb993c510fa7c433f2ceceea1f7d6bf5\"" Sep 4 23:42:41.811352 containerd[1512]: time="2025-09-04T23:42:41.811117929Z" level=info msg="StartContainer for \"ca4dfe0f3636320a0827837f249f8854fb993c510fa7c433f2ceceea1f7d6bf5\"" Sep 4 23:42:41.841284 systemd[1]: Started cri-containerd-e40666fb9190d9a35acc884620f3aaccb637e2cfa3bee9d3cc4c4c5de1c7c7e1.scope - libcontainer container e40666fb9190d9a35acc884620f3aaccb637e2cfa3bee9d3cc4c4c5de1c7c7e1. Sep 4 23:42:41.902629 systemd[1]: Started cri-containerd-ca4dfe0f3636320a0827837f249f8854fb993c510fa7c433f2ceceea1f7d6bf5.scope - libcontainer container ca4dfe0f3636320a0827837f249f8854fb993c510fa7c433f2ceceea1f7d6bf5. 
Sep 4 23:42:41.928216 containerd[1512]: time="2025-09-04T23:42:41.928148727Z" level=info msg="StartContainer for \"e40666fb9190d9a35acc884620f3aaccb637e2cfa3bee9d3cc4c4c5de1c7c7e1\" returns successfully" Sep 4 23:42:41.963223 containerd[1512]: time="2025-09-04T23:42:41.963157742Z" level=info msg="StartContainer for \"ca4dfe0f3636320a0827837f249f8854fb993c510fa7c433f2ceceea1f7d6bf5\" returns successfully" Sep 4 23:42:42.362088 kubelet[2659]: E0904 23:42:42.361742 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:42.370252 kubelet[2659]: E0904 23:42:42.370200 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:42.452848 kubelet[2659]: I0904 23:42:42.451309 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hjjm4" podStartSLOduration=38.451286385 podStartE2EDuration="38.451286385s" podCreationTimestamp="2025-09-04 23:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:42:42.403775483 +0000 UTC m=+45.361776281" watchObservedRunningTime="2025-09-04 23:42:42.451286385 +0000 UTC m=+45.409287183" Sep 4 23:42:43.372562 kubelet[2659]: E0904 23:42:43.372309 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:43.372562 kubelet[2659]: E0904 23:42:43.372496 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:43.455216 kubelet[2659]: I0904 23:42:43.455085 2659 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qb82q" podStartSLOduration=39.455060453 podStartE2EDuration="39.455060453s" podCreationTimestamp="2025-09-04 23:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:42:42.451544903 +0000 UTC m=+45.409545701" watchObservedRunningTime="2025-09-04 23:42:43.455060453 +0000 UTC m=+46.413061251" Sep 4 23:42:44.374361 kubelet[2659]: E0904 23:42:44.374293 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:44.374361 kubelet[2659]: E0904 23:42:44.374326 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:45.380554 kubelet[2659]: E0904 23:42:45.379674 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:45.382570 kubelet[2659]: E0904 23:42:45.381351 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:42:49.269744 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). Sep 4 23:42:49.356727 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:42:49.358747 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:42:49.363882 systemd-logind[1494]: New session 8 of user core. 
Sep 4 23:42:49.371045 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:42:50.014268 sshd[4056]: Connection closed by 10.0.0.1 port 58656 Sep 4 23:42:50.014816 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Sep 4 23:42:50.019775 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:58656.service: Deactivated successfully. Sep 4 23:42:50.023100 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:42:50.023974 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:42:50.025022 systemd-logind[1494]: Removed session 8. Sep 4 23:42:55.028521 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056). Sep 4 23:42:55.075320 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:42:55.077274 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:42:55.082392 systemd-logind[1494]: New session 9 of user core. Sep 4 23:42:55.091219 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:42:55.225633 sshd[4074]: Connection closed by 10.0.0.1 port 59056 Sep 4 23:42:55.226095 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Sep 4 23:42:55.231280 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:59056.service: Deactivated successfully. Sep 4 23:42:55.233958 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:42:55.234760 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:42:55.236017 systemd-logind[1494]: Removed session 9. Sep 4 23:43:00.241669 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:44098.service - OpenSSH per-connection server daemon (10.0.0.1:44098). 
Sep 4 23:43:00.289476 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 44098 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:00.291382 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:00.296652 systemd-logind[1494]: New session 10 of user core. Sep 4 23:43:00.308074 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:43:00.484168 sshd[4092]: Connection closed by 10.0.0.1 port 44098 Sep 4 23:43:00.484665 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:00.488498 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:44098.service: Deactivated successfully. Sep 4 23:43:00.490974 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:43:00.492718 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:43:00.493821 systemd-logind[1494]: Removed session 10. Sep 4 23:43:05.512469 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:44108.service - OpenSSH per-connection server daemon (10.0.0.1:44108). Sep 4 23:43:05.560946 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 44108 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:05.563044 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:05.571341 systemd-logind[1494]: New session 11 of user core. Sep 4 23:43:05.588111 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:43:05.841359 sshd[4108]: Connection closed by 10.0.0.1 port 44108 Sep 4 23:43:05.841837 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:05.846715 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:44108.service: Deactivated successfully. Sep 4 23:43:05.849635 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:43:05.851750 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 23:43:05.853527 systemd-logind[1494]: Removed session 11. Sep 4 23:43:10.855979 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:52454.service - OpenSSH per-connection server daemon (10.0.0.1:52454). Sep 4 23:43:10.900727 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 52454 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:10.902699 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:10.907913 systemd-logind[1494]: New session 12 of user core. Sep 4 23:43:10.921059 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:43:11.058261 sshd[4127]: Connection closed by 10.0.0.1 port 52454 Sep 4 23:43:11.058771 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:11.064289 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:52454.service: Deactivated successfully. Sep 4 23:43:11.067442 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:43:11.068672 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:43:11.069708 systemd-logind[1494]: Removed session 12. Sep 4 23:43:16.113001 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:52470.service - OpenSSH per-connection server daemon (10.0.0.1:52470). Sep 4 23:43:16.183503 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 52470 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:16.188224 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:16.210413 systemd-logind[1494]: New session 13 of user core. Sep 4 23:43:16.221189 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:43:16.509932 sshd[4144]: Connection closed by 10.0.0.1 port 52470 Sep 4 23:43:16.509502 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:16.543234 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:52470.service: Deactivated successfully. 
Sep 4 23:43:16.548929 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:43:16.561309 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:43:16.571388 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:52478.service - OpenSSH per-connection server daemon (10.0.0.1:52478). Sep 4 23:43:16.577283 systemd-logind[1494]: Removed session 13. Sep 4 23:43:16.674576 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 52478 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:16.676932 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:16.715249 systemd-logind[1494]: New session 14 of user core. Sep 4 23:43:16.723123 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:43:17.113879 sshd[4160]: Connection closed by 10.0.0.1 port 52478 Sep 4 23:43:17.116013 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:17.137228 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:52478.service: Deactivated successfully. Sep 4 23:43:17.147616 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:43:17.166262 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:43:17.180100 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:52494.service - OpenSSH per-connection server daemon (10.0.0.1:52494). Sep 4 23:43:17.186516 systemd-logind[1494]: Removed session 14. Sep 4 23:43:17.257692 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 52494 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:17.260404 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:17.288358 systemd-logind[1494]: New session 15 of user core. Sep 4 23:43:17.304483 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 23:43:17.566728 sshd[4173]: Connection closed by 10.0.0.1 port 52494 Sep 4 23:43:17.567654 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:17.574722 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:52494.service: Deactivated successfully. Sep 4 23:43:17.581463 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:43:17.588281 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:43:17.603602 systemd-logind[1494]: Removed session 15. Sep 4 23:43:19.165143 kubelet[2659]: E0904 23:43:19.165065 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:43:22.165412 kubelet[2659]: E0904 23:43:22.165351 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:43:22.590596 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:39418.service - OpenSSH per-connection server daemon (10.0.0.1:39418). Sep 4 23:43:22.638826 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 39418 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:22.642090 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:22.652518 systemd-logind[1494]: New session 16 of user core. Sep 4 23:43:22.660217 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:43:22.803987 sshd[4189]: Connection closed by 10.0.0.1 port 39418 Sep 4 23:43:22.804649 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:22.812789 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:39418.service: Deactivated successfully. Sep 4 23:43:22.817158 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:43:22.818460 systemd-logind[1494]: Session 16 logged out. 
Waiting for processes to exit. Sep 4 23:43:22.819853 systemd-logind[1494]: Removed session 16. Sep 4 23:43:26.165452 kubelet[2659]: E0904 23:43:26.165387 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:43:27.820673 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:39430.service - OpenSSH per-connection server daemon (10.0.0.1:39430). Sep 4 23:43:27.863650 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 39430 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:27.865983 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:27.871482 systemd-logind[1494]: New session 17 of user core. Sep 4 23:43:27.888132 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:43:28.034462 sshd[4205]: Connection closed by 10.0.0.1 port 39430 Sep 4 23:43:28.034965 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:28.040582 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:39430.service: Deactivated successfully. Sep 4 23:43:28.044073 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:43:28.045141 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:43:28.046381 systemd-logind[1494]: Removed session 17. 
Sep 4 23:43:31.165987 kubelet[2659]: E0904 23:43:31.165863 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:43:32.165408 kubelet[2659]: E0904 23:43:32.165348 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:43:33.060100 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366). Sep 4 23:43:33.108601 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:33.110634 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:33.115992 systemd-logind[1494]: New session 18 of user core. Sep 4 23:43:33.123076 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:43:33.239015 sshd[4220]: Connection closed by 10.0.0.1 port 54366 Sep 4 23:43:33.239606 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:33.257438 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:54366.service: Deactivated successfully. Sep 4 23:43:33.259863 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:43:33.261525 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:43:33.280409 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:54372.service - OpenSSH per-connection server daemon (10.0.0.1:54372). Sep 4 23:43:33.282018 systemd-logind[1494]: Removed session 18. 
Sep 4 23:43:33.317674 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 54372 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:33.319228 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:33.325022 systemd-logind[1494]: New session 19 of user core. Sep 4 23:43:33.332089 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:43:34.232061 sshd[4235]: Connection closed by 10.0.0.1 port 54372 Sep 4 23:43:34.232826 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:34.246743 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:54372.service: Deactivated successfully. Sep 4 23:43:34.249400 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:43:34.251655 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:43:34.264158 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:54382.service - OpenSSH per-connection server daemon (10.0.0.1:54382). Sep 4 23:43:34.266065 systemd-logind[1494]: Removed session 19. Sep 4 23:43:34.309646 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 54382 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:34.311631 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:34.316678 systemd-logind[1494]: New session 20 of user core. Sep 4 23:43:34.328052 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:43:35.588943 sshd[4249]: Connection closed by 10.0.0.1 port 54382 Sep 4 23:43:35.593139 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:35.602401 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:54382.service: Deactivated successfully. Sep 4 23:43:35.609781 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:43:35.612917 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. 
Sep 4 23:43:35.628449 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390). Sep 4 23:43:35.631468 systemd-logind[1494]: Removed session 20. Sep 4 23:43:35.670354 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:35.672461 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:35.677927 systemd-logind[1494]: New session 21 of user core. Sep 4 23:43:35.686085 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:43:36.561502 sshd[4270]: Connection closed by 10.0.0.1 port 54390 Sep 4 23:43:36.562206 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:36.573394 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:54390.service: Deactivated successfully. Sep 4 23:43:36.576786 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:43:36.579283 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:43:36.586252 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396). Sep 4 23:43:36.587351 systemd-logind[1494]: Removed session 21. Sep 4 23:43:36.629590 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:36.631558 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:36.636646 systemd-logind[1494]: New session 22 of user core. Sep 4 23:43:36.645051 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:43:37.166303 sshd[4283]: Connection closed by 10.0.0.1 port 54396 Sep 4 23:43:37.166752 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:37.171604 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:54396.service: Deactivated successfully. 
Sep 4 23:43:37.174462 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:43:37.175425 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:43:37.177371 systemd-logind[1494]: Removed session 22. Sep 4 23:43:42.201732 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:44424.service - OpenSSH per-connection server daemon (10.0.0.1:44424). Sep 4 23:43:42.278376 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:42.280886 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:42.286488 systemd-logind[1494]: New session 23 of user core. Sep 4 23:43:42.296169 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:43:42.468005 sshd[4300]: Connection closed by 10.0.0.1 port 44424 Sep 4 23:43:42.468308 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:42.473088 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:44424.service: Deactivated successfully. Sep 4 23:43:42.475594 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:43:42.476558 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:43:42.477653 systemd-logind[1494]: Removed session 23. Sep 4 23:43:47.481739 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:44440.service - OpenSSH per-connection server daemon (10.0.0.1:44440). Sep 4 23:43:47.523935 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 44440 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:47.525239 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:47.530439 systemd-logind[1494]: New session 24 of user core. Sep 4 23:43:47.544198 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 4 23:43:47.857826 sshd[4315]: Connection closed by 10.0.0.1 port 44440 Sep 4 23:43:47.858359 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:47.863719 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:44440.service: Deactivated successfully. Sep 4 23:43:47.866884 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:43:47.867944 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:43:47.869611 systemd-logind[1494]: Removed session 24. Sep 4 23:43:52.876973 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:35954.service - OpenSSH per-connection server daemon (10.0.0.1:35954). Sep 4 23:43:52.918811 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:52.920710 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:52.926096 systemd-logind[1494]: New session 25 of user core. Sep 4 23:43:52.934027 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:43:53.330227 sshd[4333]: Connection closed by 10.0.0.1 port 35954 Sep 4 23:43:53.330603 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:53.334714 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:35954.service: Deactivated successfully. Sep 4 23:43:53.336840 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:43:53.337542 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:43:53.338519 systemd-logind[1494]: Removed session 25. Sep 4 23:43:58.345664 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:35970.service - OpenSSH per-connection server daemon (10.0.0.1:35970). 
Sep 4 23:43:58.388581 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 35970 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:43:58.390600 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:43:58.395310 systemd-logind[1494]: New session 26 of user core. Sep 4 23:43:58.405059 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 23:43:58.527330 sshd[4351]: Connection closed by 10.0.0.1 port 35970 Sep 4 23:43:58.527775 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Sep 4 23:43:58.532888 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:35970.service: Deactivated successfully. Sep 4 23:43:58.536024 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:43:58.536999 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:43:58.538467 systemd-logind[1494]: Removed session 26. Sep 4 23:43:59.165011 kubelet[2659]: E0904 23:43:59.164962 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:03.542766 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:43320.service - OpenSSH per-connection server daemon (10.0.0.1:43320). Sep 4 23:44:03.592827 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 43320 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:44:03.595657 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:03.605597 systemd-logind[1494]: New session 27 of user core. Sep 4 23:44:03.613214 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 23:44:03.747975 sshd[4366]: Connection closed by 10.0.0.1 port 43320 Sep 4 23:44:03.748517 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:03.758948 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:43320.service: Deactivated successfully. Sep 4 23:44:03.761785 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:44:03.763817 systemd-logind[1494]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:44:03.774464 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:43324.service - OpenSSH per-connection server daemon (10.0.0.1:43324). Sep 4 23:44:03.776155 systemd-logind[1494]: Removed session 27. Sep 4 23:44:03.814070 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:44:03.816211 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:03.822117 systemd-logind[1494]: New session 28 of user core. Sep 4 23:44:03.833120 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 23:44:04.168289 kubelet[2659]: E0904 23:44:04.166316 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:05.210522 containerd[1512]: time="2025-09-04T23:44:05.210466895Z" level=info msg="StopContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" with timeout 30 (s)" Sep 4 23:44:05.212280 containerd[1512]: time="2025-09-04T23:44:05.212235718Z" level=info msg="Stop container \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" with signal terminated" Sep 4 23:44:05.257627 systemd[1]: cri-containerd-05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18.scope: Deactivated successfully. 
Sep 4 23:44:05.273782 containerd[1512]: time="2025-09-04T23:44:05.271591486Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:44:05.281765 containerd[1512]: time="2025-09-04T23:44:05.281730061Z" level=info msg="StopContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" with timeout 2 (s)" Sep 4 23:44:05.283043 containerd[1512]: time="2025-09-04T23:44:05.283004262Z" level=info msg="Stop container \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" with signal terminated" Sep 4 23:44:05.285672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18-rootfs.mount: Deactivated successfully. Sep 4 23:44:05.291878 systemd-networkd[1443]: lxc_health: Link DOWN Sep 4 23:44:05.291887 systemd-networkd[1443]: lxc_health: Lost carrier Sep 4 23:44:05.298162 containerd[1512]: time="2025-09-04T23:44:05.298071755Z" level=info msg="shim disconnected" id=05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18 namespace=k8s.io Sep 4 23:44:05.298286 containerd[1512]: time="2025-09-04T23:44:05.298160994Z" level=warning msg="cleaning up after shim disconnected" id=05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18 namespace=k8s.io Sep 4 23:44:05.298286 containerd[1512]: time="2025-09-04T23:44:05.298180099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:44:05.317411 systemd[1]: cri-containerd-cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812.scope: Deactivated successfully. Sep 4 23:44:05.317934 systemd[1]: cri-containerd-cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812.scope: Consumed 8.174s CPU time, 125.5M memory peak, 332K read from disk, 13.3M written to disk. 
Sep 4 23:44:05.321339 containerd[1512]: time="2025-09-04T23:44:05.321285599Z" level=info msg="StopContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" returns successfully" Sep 4 23:44:05.322578 containerd[1512]: time="2025-09-04T23:44:05.322529333Z" level=info msg="StopPodSandbox for \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\"" Sep 4 23:44:05.329227 containerd[1512]: time="2025-09-04T23:44:05.322580218Z" level=info msg="Container to stop \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:44:05.331662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6-shm.mount: Deactivated successfully. Sep 4 23:44:05.336333 systemd[1]: cri-containerd-5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6.scope: Deactivated successfully. Sep 4 23:44:05.347235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812-rootfs.mount: Deactivated successfully. 
Sep 4 23:44:05.358940 containerd[1512]: time="2025-09-04T23:44:05.358835407Z" level=info msg="shim disconnected" id=cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812 namespace=k8s.io
Sep 4 23:44:05.358940 containerd[1512]: time="2025-09-04T23:44:05.358936638Z" level=warning msg="cleaning up after shim disconnected" id=cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812 namespace=k8s.io
Sep 4 23:44:05.359189 containerd[1512]: time="2025-09-04T23:44:05.358948179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:44:05.366684 containerd[1512]: time="2025-09-04T23:44:05.366592534Z" level=info msg="shim disconnected" id=5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6 namespace=k8s.io
Sep 4 23:44:05.366684 containerd[1512]: time="2025-09-04T23:44:05.366674598Z" level=warning msg="cleaning up after shim disconnected" id=5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6 namespace=k8s.io
Sep 4 23:44:05.366684 containerd[1512]: time="2025-09-04T23:44:05.366686752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:44:05.384351 containerd[1512]: time="2025-09-04T23:44:05.384158575Z" level=info msg="StopContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" returns successfully"
Sep 4 23:44:05.384806 containerd[1512]: time="2025-09-04T23:44:05.384768614Z" level=info msg="StopPodSandbox for \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\""
Sep 4 23:44:05.384863 containerd[1512]: time="2025-09-04T23:44:05.384812277Z" level=info msg="Container to stop \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:44:05.384937 containerd[1512]: time="2025-09-04T23:44:05.384863203Z" level=info msg="Container to stop \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:44:05.384937 containerd[1512]: time="2025-09-04T23:44:05.384879303Z" level=info msg="Container to stop \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:44:05.384937 containerd[1512]: time="2025-09-04T23:44:05.384912296Z" level=info msg="Container to stop \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:44:05.384937 containerd[1512]: time="2025-09-04T23:44:05.384922184Z" level=info msg="Container to stop \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:44:05.393420 systemd[1]: cri-containerd-c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47.scope: Deactivated successfully.
Sep 4 23:44:05.404550 containerd[1512]: time="2025-09-04T23:44:05.404466973Z" level=info msg="TearDown network for sandbox \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\" successfully"
Sep 4 23:44:05.404550 containerd[1512]: time="2025-09-04T23:44:05.404538508Z" level=info msg="StopPodSandbox for \"5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6\" returns successfully"
Sep 4 23:44:05.424968 containerd[1512]: time="2025-09-04T23:44:05.424831378Z" level=info msg="shim disconnected" id=c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47 namespace=k8s.io
Sep 4 23:44:05.424968 containerd[1512]: time="2025-09-04T23:44:05.424948579Z" level=warning msg="cleaning up after shim disconnected" id=c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47 namespace=k8s.io
Sep 4 23:44:05.424968 containerd[1512]: time="2025-09-04T23:44:05.424962726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:44:05.429777 kubelet[2659]: I0904 23:44:05.429745 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-cilium-config-path\") pod \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\" (UID: \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\") "
Sep 4 23:44:05.430177 kubelet[2659]: I0904 23:44:05.429796 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5jf2\" (UniqueName: \"kubernetes.io/projected/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-kube-api-access-b5jf2\") pod \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\" (UID: \"53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f\") "
Sep 4 23:44:05.434959 kubelet[2659]: I0904 23:44:05.434920 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-kube-api-access-b5jf2" (OuterVolumeSpecName: "kube-api-access-b5jf2") pod "53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f" (UID: "53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f"). InnerVolumeSpecName "kube-api-access-b5jf2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:44:05.437383 kubelet[2659]: I0904 23:44:05.437305 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f" (UID: "53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:44:05.443242 containerd[1512]: time="2025-09-04T23:44:05.443193479Z" level=info msg="TearDown network for sandbox \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" successfully"
Sep 4 23:44:05.443242 containerd[1512]: time="2025-09-04T23:44:05.443240508Z" level=info msg="StopPodSandbox for \"c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47\" returns successfully"
Sep 4 23:44:05.530963 kubelet[2659]: I0904 23:44:05.530849 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-lib-modules\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.530983 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-config-path\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.531010 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75d2f65c-be6b-49bc-b83f-56af452cdd2b-clustermesh-secrets\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.531029 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4m9n\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-kube-api-access-t4m9n\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.531019 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.531060 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-bpf-maps\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531136 kubelet[2659]: I0904 23:44:05.531080 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hubble-tls\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531290 kubelet[2659]: I0904 23:44:05.531096 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cni-path\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.531290 kubelet[2659]: I0904 23:44:05.531121 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.531290 kubelet[2659]: I0904 23:44:05.531151 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cni-path" (OuterVolumeSpecName: "cni-path") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531421 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-xtables-lock\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531495 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-run\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531524 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hostproc\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531549 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-net\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531613 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-etc-cni-netd\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.533915 kubelet[2659]: I0904 23:44:05.531637 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-kernel\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531680 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-cgroup\") pod \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\" (UID: \"75d2f65c-be6b-49bc-b83f-56af452cdd2b\") "
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531768 2659 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531786 2659 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531800 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5jf2\" (UniqueName: \"kubernetes.io/projected/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-kube-api-access-b5jf2\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531831 2659 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531844 2659 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.534176 kubelet[2659]: I0904 23:44:05.531872 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534416 kubelet[2659]: I0904 23:44:05.532392 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hostproc" (OuterVolumeSpecName: "hostproc") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534416 kubelet[2659]: I0904 23:44:05.532450 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534416 kubelet[2659]: I0904 23:44:05.532475 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534416 kubelet[2659]: I0904 23:44:05.532614 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534416 kubelet[2659]: I0904 23:44:05.532650 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.534587 kubelet[2659]: I0904 23:44:05.532783 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:44:05.535043 kubelet[2659]: I0904 23:44:05.534998 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:44:05.535567 kubelet[2659]: I0904 23:44:05.535513 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-kube-api-access-t4m9n" (OuterVolumeSpecName: "kube-api-access-t4m9n") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "kube-api-access-t4m9n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:44:05.535924 kubelet[2659]: I0904 23:44:05.535878 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d2f65c-be6b-49bc-b83f-56af452cdd2b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 4 23:44:05.536647 kubelet[2659]: I0904 23:44:05.536611 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75d2f65c-be6b-49bc-b83f-56af452cdd2b" (UID: "75d2f65c-be6b-49bc-b83f-56af452cdd2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:44:05.579921 kubelet[2659]: I0904 23:44:05.579863 2659 scope.go:117] "RemoveContainer" containerID="cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812"
Sep 4 23:44:05.589650 systemd[1]: Removed slice kubepods-burstable-pod75d2f65c_be6b_49bc_b83f_56af452cdd2b.slice - libcontainer container kubepods-burstable-pod75d2f65c_be6b_49bc_b83f_56af452cdd2b.slice.
Sep 4 23:44:05.589765 systemd[1]: kubepods-burstable-pod75d2f65c_be6b_49bc_b83f_56af452cdd2b.slice: Consumed 8.306s CPU time, 125.8M memory peak, 348K read from disk, 13.3M written to disk.
Sep 4 23:44:05.590850 systemd[1]: Removed slice kubepods-besteffort-pod53e99c6e_dda3_4308_a6d4_e6e9e5a2ed1f.slice - libcontainer container kubepods-besteffort-pod53e99c6e_dda3_4308_a6d4_e6e9e5a2ed1f.slice.
Sep 4 23:44:05.592114 containerd[1512]: time="2025-09-04T23:44:05.592062014Z" level=info msg="RemoveContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\""
Sep 4 23:44:05.597232 containerd[1512]: time="2025-09-04T23:44:05.597176571Z" level=info msg="RemoveContainer for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" returns successfully"
Sep 4 23:44:05.597535 kubelet[2659]: I0904 23:44:05.597501 2659 scope.go:117] "RemoveContainer" containerID="eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45"
Sep 4 23:44:05.598643 containerd[1512]: time="2025-09-04T23:44:05.598586237Z" level=info msg="RemoveContainer for \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\""
Sep 4 23:44:05.603885 containerd[1512]: time="2025-09-04T23:44:05.603809971Z" level=info msg="RemoveContainer for \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\" returns successfully"
Sep 4 23:44:05.604295 kubelet[2659]: I0904 23:44:05.604214 2659 scope.go:117] "RemoveContainer" containerID="0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03"
Sep 4 23:44:05.605859 containerd[1512]: time="2025-09-04T23:44:05.605827493Z" level=info msg="RemoveContainer for \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\""
Sep 4 23:44:05.617720 containerd[1512]: time="2025-09-04T23:44:05.617673956Z" level=info msg="RemoveContainer for \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\" returns successfully"
Sep 4 23:44:05.617991 kubelet[2659]: I0904 23:44:05.617953 2659 scope.go:117] "RemoveContainer" containerID="ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7"
Sep 4 23:44:05.619226 containerd[1512]: time="2025-09-04T23:44:05.619181827Z" level=info msg="RemoveContainer for \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\""
Sep 4 23:44:05.623864 containerd[1512]: time="2025-09-04T23:44:05.623823364Z" level=info msg="RemoveContainer for \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\" returns successfully"
Sep 4 23:44:05.624068 kubelet[2659]: I0904 23:44:05.624042 2659 scope.go:117] "RemoveContainer" containerID="eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5"
Sep 4 23:44:05.624977 containerd[1512]: time="2025-09-04T23:44:05.624953985Z" level=info msg="RemoveContainer for \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\""
Sep 4 23:44:05.629147 containerd[1512]: time="2025-09-04T23:44:05.629108453Z" level=info msg="RemoveContainer for \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\" returns successfully"
Sep 4 23:44:05.629284 kubelet[2659]: I0904 23:44:05.629259 2659 scope.go:117] "RemoveContainer" containerID="cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812"
Sep 4 23:44:05.629551 containerd[1512]: time="2025-09-04T23:44:05.629501213Z" level=error msg="ContainerStatus for \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\": not found"
Sep 4 23:44:05.629691 kubelet[2659]: E0904 23:44:05.629668 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\": not found" containerID="cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812"
Sep 4 23:44:05.629738 kubelet[2659]: I0904 23:44:05.629699 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812"} err="failed to get container status \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbbfed0db8a58d4c65a894cc35df9b64d91bb285d5e758c36c3dfbdb9284f812\": not found"
Sep 4 23:44:05.629770 kubelet[2659]: I0904 23:44:05.629738 2659 scope.go:117] "RemoveContainer" containerID="eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45"
Sep 4 23:44:05.629969 containerd[1512]: time="2025-09-04T23:44:05.629933988Z" level=error msg="ContainerStatus for \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\": not found"
Sep 4 23:44:05.630102 kubelet[2659]: E0904 23:44:05.630045 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\": not found" containerID="eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45"
Sep 4 23:44:05.630102 kubelet[2659]: I0904 23:44:05.630079 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45"} err="failed to get container status \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\": rpc error: code = NotFound desc = an error occurred when try to find container \"eef92eae8e4a47ba0f5ecdbbe53b08d3858649fc459f3d5390123a336167da45\": not found"
Sep 4 23:44:05.630102 kubelet[2659]: I0904 23:44:05.630094 2659 scope.go:117] "RemoveContainer" containerID="0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03"
Sep 4 23:44:05.630399 containerd[1512]: time="2025-09-04T23:44:05.630254472Z" level=error msg="ContainerStatus for \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\": not found"
Sep 4 23:44:05.630460 kubelet[2659]: E0904 23:44:05.630394 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\": not found" containerID="0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03"
Sep 4 23:44:05.630460 kubelet[2659]: I0904 23:44:05.630412 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03"} err="failed to get container status \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\": rpc error: code = NotFound desc = an error occurred when try to find container \"0dcadd1a2badd3c09271fd1df445b4abe5056808cbdb134c2c0c0f4892389e03\": not found"
Sep 4 23:44:05.630460 kubelet[2659]: I0904 23:44:05.630425 2659 scope.go:117] "RemoveContainer" containerID="ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7"
Sep 4 23:44:05.630614 containerd[1512]: time="2025-09-04T23:44:05.630571349Z" level=error msg="ContainerStatus for \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\": not found"
Sep 4 23:44:05.630717 kubelet[2659]: E0904 23:44:05.630690 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\": not found" containerID="ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7"
Sep 4 23:44:05.630780 kubelet[2659]: I0904 23:44:05.630717 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7"} err="failed to get container status \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec2af8a5541667d7ec719d918bbf5186b74bf9eae52dc4bb24fe8b7c807749a7\": not found"
Sep 4 23:44:05.630780 kubelet[2659]: I0904 23:44:05.630739 2659 scope.go:117] "RemoveContainer" containerID="eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5"
Sep 4 23:44:05.630928 containerd[1512]: time="2025-09-04T23:44:05.630881994Z" level=error msg="ContainerStatus for \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\": not found"
Sep 4 23:44:05.631005 kubelet[2659]: E0904 23:44:05.630984 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\": not found" containerID="eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5"
Sep 4 23:44:05.631051 kubelet[2659]: I0904 23:44:05.631003 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5"} err="failed to get container status \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb92324ad88cbde55e6e58f02810776c7b3c04ac5a866ae6a552e70f4b7cfae5\": not found"
Sep 4 23:44:05.631051 kubelet[2659]: I0904 23:44:05.631026 2659 scope.go:117] "RemoveContainer" containerID="05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18"
Sep 4 23:44:05.632037 kubelet[2659]: I0904 23:44:05.631999 2659 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632157 kubelet[2659]: I0904 23:44:05.632133 2659 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632157 kubelet[2659]: I0904 23:44:05.632154 2659 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632168 2659 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632182 2659 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632195 2659 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632208 2659 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632220 2659 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d2f65c-be6b-49bc-b83f-56af452cdd2b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632232 2659 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75d2f65c-be6b-49bc-b83f-56af452cdd2b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632244 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4m9n\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-kube-api-access-t4m9n\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632275 kubelet[2659]: I0904 23:44:05.632257 2659 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d2f65c-be6b-49bc-b83f-56af452cdd2b-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 4 23:44:05.632524 containerd[1512]: time="2025-09-04T23:44:05.632368526Z" level=info msg="RemoveContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\""
Sep 4 23:44:05.637012 containerd[1512]: time="2025-09-04T23:44:05.636983342Z" level=info msg="RemoveContainer for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" returns successfully"
Sep 4 23:44:05.637207 kubelet[2659]: I0904 23:44:05.637182 2659 scope.go:117] "RemoveContainer" containerID="05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18"
Sep 4 23:44:05.637471 containerd[1512]: time="2025-09-04T23:44:05.637422590Z" level=error msg="ContainerStatus for \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\": not found"
Sep 4 23:44:05.637608 kubelet[2659]: E0904 23:44:05.637573 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\": not found" containerID="05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18"
Sep 4 23:44:05.637657 kubelet[2659]: I0904 23:44:05.637626 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18"} err="failed to get container status \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\": rpc error: code = NotFound desc = an error occurred when try to find container \"05e516aaf56b0b3959344893283cf632afea1f125054c26295fd0e855c122d18\": not found"
Sep 4 23:44:06.253047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47-rootfs.mount: Deactivated successfully.
Sep 4 23:44:06.253196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5449ac532d784cd167197f5139dd6e9b33cea682dab4704320fee99418dbc47-shm.mount: Deactivated successfully.
Sep 4 23:44:06.253305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cc9ed24615f88049f17ff0edd8d077cdf57a579e3ab2c61a25caba281a8f8e6-rootfs.mount: Deactivated successfully.
Sep 4 23:44:06.253484 systemd[1]: var-lib-kubelet-pods-75d2f65c\x2dbe6b\x2d49bc\x2db83f\x2d56af452cdd2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt4m9n.mount: Deactivated successfully.
Sep 4 23:44:06.253644 systemd[1]: var-lib-kubelet-pods-53e99c6e\x2ddda3\x2d4308\x2da6d4\x2de6e9e5a2ed1f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5jf2.mount: Deactivated successfully.
Sep 4 23:44:06.253769 systemd[1]: var-lib-kubelet-pods-75d2f65c\x2dbe6b\x2d49bc\x2db83f\x2d56af452cdd2b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:44:06.253904 systemd[1]: var-lib-kubelet-pods-75d2f65c\x2dbe6b\x2d49bc\x2db83f\x2d56af452cdd2b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:44:07.167502 kubelet[2659]: I0904 23:44:07.167435 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f" path="/var/lib/kubelet/pods/53e99c6e-dda3-4308-a6d4-e6e9e5a2ed1f/volumes" Sep 4 23:44:07.168349 kubelet[2659]: I0904 23:44:07.168321 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d2f65c-be6b-49bc-b83f-56af452cdd2b" path="/var/lib/kubelet/pods/75d2f65c-be6b-49bc-b83f-56af452cdd2b/volumes" Sep 4 23:44:07.184495 sshd[4382]: Connection closed by 10.0.0.1 port 43324 Sep 4 23:44:07.185201 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:07.202032 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:43324.service: Deactivated successfully. Sep 4 23:44:07.205969 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 23:44:07.208416 systemd-logind[1494]: Session 28 logged out. Waiting for processes to exit. Sep 4 23:44:07.218638 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328). Sep 4 23:44:07.220334 systemd-logind[1494]: Removed session 28. 
Sep 4 23:44:07.240078 kubelet[2659]: E0904 23:44:07.240018 2659 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:44:07.305843 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:44:07.308080 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:07.313687 systemd-logind[1494]: New session 29 of user core. Sep 4 23:44:07.324098 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 23:44:08.276954 sshd[4542]: Connection closed by 10.0.0.1 port 43328 Sep 4 23:44:08.277942 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:08.302615 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:43340.service - OpenSSH per-connection server daemon (10.0.0.1:43340). Sep 4 23:44:08.304042 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:43328.service: Deactivated successfully. Sep 4 23:44:08.307016 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 23:44:08.309054 systemd-logind[1494]: Session 29 logged out. Waiting for processes to exit. Sep 4 23:44:08.311306 systemd-logind[1494]: Removed session 29. Sep 4 23:44:08.340966 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 43340 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:44:08.342342 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:08.350567 systemd-logind[1494]: New session 30 of user core. Sep 4 23:44:08.361214 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 23:44:08.369281 systemd[1]: Created slice kubepods-burstable-pod8c3fde70_f654_4006_9d8f_caca49f4af9b.slice - libcontainer container kubepods-burstable-pod8c3fde70_f654_4006_9d8f_caca49f4af9b.slice. 
Sep 4 23:44:08.425266 sshd[4556]: Connection closed by 10.0.0.1 port 43340 Sep 4 23:44:08.425641 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:08.451011 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:43340.service: Deactivated successfully. Sep 4 23:44:08.453420 kubelet[2659]: I0904 23:44:08.453369 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-cilium-run\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.453420 kubelet[2659]: I0904 23:44:08.453416 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-bpf-maps\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453438 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-cni-path\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453452 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-etc-cni-netd\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453469 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-lib-modules\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453485 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c3fde70-f654-4006-9d8f-caca49f4af9b-clustermesh-secrets\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453503 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47lc\" (UniqueName: \"kubernetes.io/projected/8c3fde70-f654-4006-9d8f-caca49f4af9b-kube-api-access-g47lc\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454040 kubelet[2659]: I0904 23:44:08.453519 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-host-proc-sys-kernel\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-xtables-lock\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453547 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8c3fde70-f654-4006-9d8f-caca49f4af9b-cilium-ipsec-secrets\") pod 
\"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453577 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c3fde70-f654-4006-9d8f-caca49f4af9b-hubble-tls\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453601 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-host-proc-sys-net\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453615 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-cilium-cgroup\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454276 kubelet[2659]: I0904 23:44:08.453629 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3fde70-f654-4006-9d8f-caca49f4af9b-cilium-config-path\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454468 kubelet[2659]: I0904 23:44:08.453649 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c3fde70-f654-4006-9d8f-caca49f4af9b-hostproc\") pod \"cilium-cxj4w\" (UID: \"8c3fde70-f654-4006-9d8f-caca49f4af9b\") " pod="kube-system/cilium-cxj4w" Sep 4 23:44:08.454760 systemd[1]: 
session-30.scope: Deactivated successfully. Sep 4 23:44:08.457878 systemd-logind[1494]: Session 30 logged out. Waiting for processes to exit. Sep 4 23:44:08.468478 systemd[1]: Started sshd@30-10.0.0.28:22-10.0.0.1:43350.service - OpenSSH per-connection server daemon (10.0.0.1:43350). Sep 4 23:44:08.469887 systemd-logind[1494]: Removed session 30. Sep 4 23:44:08.504920 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 43350 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:44:08.507071 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:08.513518 systemd-logind[1494]: New session 31 of user core. Sep 4 23:44:08.522096 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 4 23:44:08.674532 kubelet[2659]: E0904 23:44:08.674356 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:08.675324 containerd[1512]: time="2025-09-04T23:44:08.675008198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxj4w,Uid:8c3fde70-f654-4006-9d8f-caca49f4af9b,Namespace:kube-system,Attempt:0,}" Sep 4 23:44:08.701946 containerd[1512]: time="2025-09-04T23:44:08.701752557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:44:08.701946 containerd[1512]: time="2025-09-04T23:44:08.701868706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:44:08.701946 containerd[1512]: time="2025-09-04T23:44:08.701918138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:44:08.702141 containerd[1512]: time="2025-09-04T23:44:08.702047161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:44:08.736105 systemd[1]: Started cri-containerd-556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d.scope - libcontainer container 556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d. Sep 4 23:44:08.763837 containerd[1512]: time="2025-09-04T23:44:08.763783940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxj4w,Uid:8c3fde70-f654-4006-9d8f-caca49f4af9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\"" Sep 4 23:44:08.787696 kubelet[2659]: E0904 23:44:08.787471 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:08.794490 containerd[1512]: time="2025-09-04T23:44:08.794435160Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:44:08.806230 containerd[1512]: time="2025-09-04T23:44:08.806187863Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac\"" Sep 4 23:44:08.806679 containerd[1512]: time="2025-09-04T23:44:08.806643542Z" level=info msg="StartContainer for \"310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac\"" Sep 4 23:44:08.837044 systemd[1]: Started cri-containerd-310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac.scope - libcontainer container 
310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac. Sep 4 23:44:08.867043 containerd[1512]: time="2025-09-04T23:44:08.866996993Z" level=info msg="StartContainer for \"310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac\" returns successfully" Sep 4 23:44:08.879524 systemd[1]: cri-containerd-310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac.scope: Deactivated successfully. Sep 4 23:44:08.920775 containerd[1512]: time="2025-09-04T23:44:08.920694776Z" level=info msg="shim disconnected" id=310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac namespace=k8s.io Sep 4 23:44:08.920775 containerd[1512]: time="2025-09-04T23:44:08.920762484Z" level=warning msg="cleaning up after shim disconnected" id=310b5a2c7205c002442853c2bfb66bdf7d2a4eebfdc1ce5e342e27f02e4b72ac namespace=k8s.io Sep 4 23:44:08.920775 containerd[1512]: time="2025-09-04T23:44:08.920773124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:44:09.167393 kubelet[2659]: E0904 23:44:09.167341 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qb82q" podUID="32766d13-67fb-43f5-b192-a1dd7b2e6020" Sep 4 23:44:09.563407 systemd[1]: run-containerd-runc-k8s.io-556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d-runc.ubKpbQ.mount: Deactivated successfully. 
Sep 4 23:44:09.592966 kubelet[2659]: E0904 23:44:09.592919 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:09.606294 containerd[1512]: time="2025-09-04T23:44:09.606242865Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:44:09.630050 containerd[1512]: time="2025-09-04T23:44:09.629986349Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75\"" Sep 4 23:44:09.631333 containerd[1512]: time="2025-09-04T23:44:09.631281990Z" level=info msg="StartContainer for \"b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75\"" Sep 4 23:44:09.670121 systemd[1]: Started cri-containerd-b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75.scope - libcontainer container b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75. Sep 4 23:44:09.705373 containerd[1512]: time="2025-09-04T23:44:09.705304489Z" level=info msg="StartContainer for \"b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75\" returns successfully" Sep 4 23:44:09.714521 systemd[1]: cri-containerd-b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75.scope: Deactivated successfully. 
Sep 4 23:44:10.093056 containerd[1512]: time="2025-09-04T23:44:10.092985660Z" level=info msg="shim disconnected" id=b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75 namespace=k8s.io Sep 4 23:44:10.093056 containerd[1512]: time="2025-09-04T23:44:10.093050763Z" level=warning msg="cleaning up after shim disconnected" id=b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75 namespace=k8s.io Sep 4 23:44:10.093056 containerd[1512]: time="2025-09-04T23:44:10.093062315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:44:10.563122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b562a3ae12b6389740ff654189175b8940c3fd087c93e2a51ef34bbe60f8cd75-rootfs.mount: Deactivated successfully. Sep 4 23:44:10.596619 kubelet[2659]: E0904 23:44:10.596390 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:10.966230 containerd[1512]: time="2025-09-04T23:44:10.966039976Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:44:11.165106 kubelet[2659]: E0904 23:44:11.165040 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qb82q" podUID="32766d13-67fb-43f5-b192-a1dd7b2e6020" Sep 4 23:44:12.227346 kubelet[2659]: I0904 23:44:12.227278 2659 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:44:12Z","lastTransitionTime":"2025-09-04T23:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized"} Sep 4 23:44:12.241003 kubelet[2659]: E0904 23:44:12.240951 2659 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:44:12.481640 containerd[1512]: time="2025-09-04T23:44:12.481433569Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121\"" Sep 4 23:44:12.482246 containerd[1512]: time="2025-09-04T23:44:12.482053887Z" level=info msg="StartContainer for \"201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121\"" Sep 4 23:44:12.516037 systemd[1]: Started cri-containerd-201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121.scope - libcontainer container 201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121. Sep 4 23:44:12.843195 systemd[1]: cri-containerd-201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121.scope: Deactivated successfully. Sep 4 23:44:12.971536 containerd[1512]: time="2025-09-04T23:44:12.971406457Z" level=info msg="StartContainer for \"201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121\" returns successfully" Sep 4 23:44:12.974545 kubelet[2659]: E0904 23:44:12.974509 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:12.994394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121-rootfs.mount: Deactivated successfully. 
Sep 4 23:44:13.165136 kubelet[2659]: E0904 23:44:13.164865 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qb82q" podUID="32766d13-67fb-43f5-b192-a1dd7b2e6020" Sep 4 23:44:13.630280 containerd[1512]: time="2025-09-04T23:44:13.630206170Z" level=info msg="shim disconnected" id=201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121 namespace=k8s.io Sep 4 23:44:13.630280 containerd[1512]: time="2025-09-04T23:44:13.630270932Z" level=warning msg="cleaning up after shim disconnected" id=201ddc997a720ccf1fbf46f461a42fcff2768f5a834a734d0379d01c4a9c9121 namespace=k8s.io Sep 4 23:44:13.630280 containerd[1512]: time="2025-09-04T23:44:13.630279178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:44:13.979319 kubelet[2659]: E0904 23:44:13.979163 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:14.199278 containerd[1512]: time="2025-09-04T23:44:14.199207593Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:44:14.703203 containerd[1512]: time="2025-09-04T23:44:14.703093575Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b\"" Sep 4 23:44:14.704282 containerd[1512]: time="2025-09-04T23:44:14.704181425Z" level=info msg="StartContainer for \"bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b\"" Sep 4 23:44:14.742987 systemd[1]: 
run-containerd-runc-k8s.io-bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b-runc.rJKigY.mount: Deactivated successfully. Sep 4 23:44:14.756123 systemd[1]: Started cri-containerd-bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b.scope - libcontainer container bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b. Sep 4 23:44:14.786251 systemd[1]: cri-containerd-bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b.scope: Deactivated successfully. Sep 4 23:44:15.003101 containerd[1512]: time="2025-09-04T23:44:15.002824956Z" level=info msg="StartContainer for \"bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b\" returns successfully" Sep 4 23:44:15.007689 kubelet[2659]: E0904 23:44:15.007648 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:15.165178 kubelet[2659]: E0904 23:44:15.165081 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qb82q" podUID="32766d13-67fb-43f5-b192-a1dd7b2e6020" Sep 4 23:44:15.314500 containerd[1512]: time="2025-09-04T23:44:15.314409173Z" level=info msg="shim disconnected" id=bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b namespace=k8s.io Sep 4 23:44:15.314500 containerd[1512]: time="2025-09-04T23:44:15.314493141Z" level=warning msg="cleaning up after shim disconnected" id=bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b namespace=k8s.io Sep 4 23:44:15.314500 containerd[1512]: time="2025-09-04T23:44:15.314504613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:44:15.642871 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bd486229ff9fb26c043c2b3b29bbe31f705c650387f014ba65640b9c2828be0b-rootfs.mount: Deactivated successfully. Sep 4 23:44:16.012520 kubelet[2659]: E0904 23:44:16.012349 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:16.049104 containerd[1512]: time="2025-09-04T23:44:16.049026432Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:44:16.111544 containerd[1512]: time="2025-09-04T23:44:16.111468491Z" level=info msg="CreateContainer within sandbox \"556432376ff46a13b8a724bc7defdd544ff7ecfcd5441e8cfbb89a83be34981d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74\"" Sep 4 23:44:16.112109 containerd[1512]: time="2025-09-04T23:44:16.112074813Z" level=info msg="StartContainer for \"1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74\"" Sep 4 23:44:16.155207 systemd[1]: Started cri-containerd-1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74.scope - libcontainer container 1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74. 
Sep 4 23:44:16.207341 containerd[1512]: time="2025-09-04T23:44:16.207263851Z" level=info msg="StartContainer for \"1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74\" returns successfully" Sep 4 23:44:16.770932 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 4 23:44:17.017181 kubelet[2659]: E0904 23:44:17.017148 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:17.165562 kubelet[2659]: E0904 23:44:17.165328 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qb82q" podUID="32766d13-67fb-43f5-b192-a1dd7b2e6020" Sep 4 23:44:17.169601 kubelet[2659]: I0904 23:44:17.169495 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxj4w" podStartSLOduration=9.169466064 podStartE2EDuration="9.169466064s" podCreationTimestamp="2025-09-04 23:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:44:17.168857157 +0000 UTC m=+140.126857976" watchObservedRunningTime="2025-09-04 23:44:17.169466064 +0000 UTC m=+140.127466872" Sep 4 23:44:18.675775 kubelet[2659]: E0904 23:44:18.675716 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:19.166881 kubelet[2659]: E0904 23:44:19.166704 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:20.363665 systemd-networkd[1443]: 
lxc_health: Link UP Sep 4 23:44:20.373518 systemd-networkd[1443]: lxc_health: Gained carrier Sep 4 23:44:20.677117 kubelet[2659]: E0904 23:44:20.676925 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:21.026672 kubelet[2659]: E0904 23:44:21.026631 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:21.382273 systemd-networkd[1443]: lxc_health: Gained IPv6LL Sep 4 23:44:22.028114 kubelet[2659]: E0904 23:44:22.028078 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:27.255671 systemd[1]: run-containerd-runc-k8s.io-1cb9daf0cdab53a694fa89e41cc2d4129e1563451bf7fd945388cc948252db74-runc.pgBkZH.mount: Deactivated successfully. Sep 4 23:44:28.165491 kubelet[2659]: E0904 23:44:28.165426 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:29.164885 kubelet[2659]: E0904 23:44:29.164824 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:44:33.027359 sshd[4565]: Connection closed by 10.0.0.1 port 43350 Sep 4 23:44:33.028022 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:33.034327 systemd[1]: sshd@30-10.0.0.28:22-10.0.0.1:43350.service: Deactivated successfully. Sep 4 23:44:33.037202 systemd[1]: session-31.scope: Deactivated successfully. Sep 4 23:44:33.038079 systemd-logind[1494]: Session 31 logged out. Waiting for processes to exit. 
Sep 4 23:44:33.039205 systemd-logind[1494]: Removed session 31.