Sep 5 00:04:50.917564 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025 Sep 5 00:04:50.917586 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:04:50.917597 kernel: BIOS-provided physical RAM map: Sep 5 00:04:50.917604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 5 00:04:50.917610 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 5 00:04:50.917616 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 5 00:04:50.917625 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 5 00:04:50.917633 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 5 00:04:50.917641 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 5 00:04:50.917649 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 5 00:04:50.917660 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 5 00:04:50.917668 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 5 00:04:50.917680 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 5 00:04:50.917688 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 5 00:04:50.917701 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 5 00:04:50.917710 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 5 00:04:50.917722 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 5 00:04:50.917728 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 5 00:04:50.917735 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 5 00:04:50.917742 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 5 00:04:50.917749 kernel: NX (Execute Disable) protection: active Sep 5 00:04:50.917756 kernel: APIC: Static calls initialized Sep 5 00:04:50.917763 kernel: efi: EFI v2.7 by EDK II Sep 5 00:04:50.917770 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Sep 5 00:04:50.917776 kernel: SMBIOS 2.8 present. Sep 5 00:04:50.917783 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 5 00:04:50.917790 kernel: Hypervisor detected: KVM Sep 5 00:04:50.917799 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 5 00:04:50.917806 kernel: kvm-clock: using sched offset of 6651898520 cycles Sep 5 00:04:50.917813 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 5 00:04:50.917821 kernel: tsc: Detected 2794.748 MHz processor Sep 5 00:04:50.917828 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:04:50.917835 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:04:50.917843 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 5 00:04:50.917850 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 5 00:04:50.917857 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:04:50.917866 kernel: Using GB pages for direct mapping Sep 5 00:04:50.917873 kernel: Secure boot disabled Sep 5 00:04:50.917880 kernel: ACPI: Early table checksum verification disabled Sep 5 00:04:50.917888 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 5 00:04:50.917899 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 5 00:04:50.917906 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917914 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917924 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 5 00:04:50.917931 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917941 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917948 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917956 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.917963 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:04:50.917970 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 5 00:04:50.917980 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 5 00:04:50.917988 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 5 00:04:50.917995 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 5 00:04:50.918002 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 5 00:04:50.918010 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 5 00:04:50.918017 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 5 00:04:50.918024 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 5 00:04:50.918032 kernel: No NUMA configuration found Sep 5 00:04:50.918041 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 5 00:04:50.918051 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 5 00:04:50.918059 kernel: Zone ranges: Sep 5 00:04:50.918066 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:04:50.918073 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 5 00:04:50.918081 kernel: Normal empty Sep 5 00:04:50.918088 
kernel: Movable zone start for each node Sep 5 00:04:50.918095 kernel: Early memory node ranges Sep 5 00:04:50.918102 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 5 00:04:50.918110 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 5 00:04:50.918117 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 5 00:04:50.918127 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 5 00:04:50.918134 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 5 00:04:50.918141 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 5 00:04:50.918151 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 5 00:04:50.918158 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:04:50.918165 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 5 00:04:50.918173 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 5 00:04:50.918180 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:04:50.918187 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 5 00:04:50.918197 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 5 00:04:50.918205 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 5 00:04:50.918212 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 5 00:04:50.918238 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 5 00:04:50.918247 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 5 00:04:50.918254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:04:50.918261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 5 00:04:50.918269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:04:50.918276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 5 00:04:50.918287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 5 00:04:50.918294 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Sep 5 00:04:50.918301 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:04:50.918309 kernel: TSC deadline timer available Sep 5 00:04:50.918316 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 5 00:04:50.918324 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 5 00:04:50.918331 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 5 00:04:50.918338 kernel: kvm-guest: setup PV sched yield Sep 5 00:04:50.918346 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 5 00:04:50.918355 kernel: Booting paravirtualized kernel on KVM Sep 5 00:04:50.918363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:04:50.918370 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 5 00:04:50.918378 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 5 00:04:50.918385 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 5 00:04:50.918392 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 5 00:04:50.918399 kernel: kvm-guest: PV spinlocks enabled Sep 5 00:04:50.918407 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 5 00:04:50.918415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:04:50.918428 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 5 00:04:50.918436 kernel: random: crng init done Sep 5 00:04:50.918443 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 00:04:50.918451 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:04:50.918464 kernel: Fallback order for Node 0: 0 Sep 5 00:04:50.918472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 5 00:04:50.918479 kernel: Policy zone: DMA32 Sep 5 00:04:50.918487 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:04:50.918495 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 166140K reserved, 0K cma-reserved) Sep 5 00:04:50.918505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 00:04:50.918512 kernel: ftrace: allocating 37969 entries in 149 pages Sep 5 00:04:50.918519 kernel: ftrace: allocated 149 pages with 4 groups Sep 5 00:04:50.918527 kernel: Dynamic Preempt: voluntary Sep 5 00:04:50.918542 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:04:50.918554 kernel: rcu: RCU event tracing is enabled. Sep 5 00:04:50.918561 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 00:04:50.918569 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:04:50.918577 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:04:50.918585 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:04:50.918592 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:04:50.918602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 00:04:50.918610 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 5 00:04:50.918620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 5 00:04:50.918628 kernel: Console: colour dummy device 80x25 Sep 5 00:04:50.918636 kernel: printk: console [ttyS0] enabled Sep 5 00:04:50.918646 kernel: ACPI: Core revision 20230628 Sep 5 00:04:50.918654 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 5 00:04:50.918661 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:04:50.918669 kernel: x2apic enabled Sep 5 00:04:50.918677 kernel: APIC: Switched APIC routing to: physical x2apic Sep 5 00:04:50.918684 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 5 00:04:50.918692 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 5 00:04:50.918700 kernel: kvm-guest: setup PV IPIs Sep 5 00:04:50.918708 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 5 00:04:50.918718 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 5 00:04:50.918726 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 5 00:04:50.918733 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 5 00:04:50.918741 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 5 00:04:50.918749 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 5 00:04:50.918756 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:04:50.918764 kernel: Spectre V2 : Mitigation: Retpolines Sep 5 00:04:50.918772 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 5 00:04:50.918780 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 5 00:04:50.918790 kernel: active return thunk: retbleed_return_thunk Sep 5 00:04:50.918797 kernel: RETBleed: Mitigation: untrained return thunk Sep 5 00:04:50.918805 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:04:50.918813 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:04:50.918823 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 5 00:04:50.918831 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 5 00:04:50.918839 kernel: active return thunk: srso_return_thunk Sep 5 00:04:50.918847 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 5 00:04:50.918855 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:04:50.918865 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:04:50.918873 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:04:50.918881 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:04:50.918888 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 5 00:04:50.918896 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:04:50.918904 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:04:50.918912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:04:50.918919 kernel: landlock: Up and running. Sep 5 00:04:50.918927 kernel: SELinux: Initializing. Sep 5 00:04:50.918937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.918945 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.918953 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 5 00:04:50.918961 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.918969 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.918976 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.918984 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 5 00:04:50.918992 kernel: ... version: 0 Sep 5 00:04:50.919002 kernel: ... bit width: 48 Sep 5 00:04:50.919009 kernel: ... generic registers: 6 Sep 5 00:04:50.919017 kernel: ... value mask: 0000ffffffffffff Sep 5 00:04:50.919025 kernel: ... max period: 00007fffffffffff Sep 5 00:04:50.919032 kernel: ... fixed-purpose events: 0 Sep 5 00:04:50.919040 kernel: ... event mask: 000000000000003f Sep 5 00:04:50.919048 kernel: signal: max sigframe size: 1776 Sep 5 00:04:50.919055 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:04:50.919063 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:04:50.919071 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:04:50.919081 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:04:50.919089 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 5 00:04:50.919096 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 00:04:50.919104 kernel: smpboot: Max logical packages: 1 Sep 5 00:04:50.919112 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 5 00:04:50.919119 kernel: devtmpfs: initialized Sep 5 00:04:50.919127 kernel: x86/mm: Memory block size: 128MB Sep 5 00:04:50.919135 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 5 00:04:50.919143 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 5 00:04:50.919153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 5 00:04:50.919161 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 5 00:04:50.919169 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 5 00:04:50.919177 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:04:50.919184 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.919192 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:04:50.919200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:04:50.919207 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:04:50.919308 kernel: audit: type=2000 audit(1757030690.175:1): state=initialized audit_enabled=0 res=1 Sep 5 00:04:50.919317 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:04:50.919324 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:04:50.919332 kernel: cpuidle: using governor menu Sep 5 00:04:50.919340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:04:50.919348 kernel: dca service started, version 1.12.1 Sep 5 00:04:50.919356 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 5 00:04:50.919363 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 5 00:04:50.919371 kernel: PCI: Using configuration type 1 for base access Sep 5 00:04:50.919382 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 5 00:04:50.919390 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:04:50.919398 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:04:50.919405 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:04:50.919413 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:04:50.919421 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:04:50.919428 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:04:50.919436 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:04:50.919444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 00:04:50.919454 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 5 00:04:50.919469 kernel: ACPI: Interpreter enabled Sep 5 00:04:50.919477 kernel: ACPI: PM: (supports S0 S3 S5) Sep 5 00:04:50.919484 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:04:50.919493 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:04:50.919501 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:04:50.919508 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 5 00:04:50.919516 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 00:04:50.919731 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:04:50.919901 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 5 00:04:50.920031 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 5 00:04:50.920042 kernel: PCI host bridge to bus 0000:00 Sep 5 00:04:50.920182 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Sep 5 00:04:50.920318 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:04:50.920437 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:04:50.920569 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 5 00:04:50.920713 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 5 00:04:50.920830 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 5 00:04:50.920946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 00:04:50.921102 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 5 00:04:50.921275 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 5 00:04:50.921409 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 5 00:04:50.921554 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 5 00:04:50.921683 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 5 00:04:50.921818 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 5 00:04:50.921946 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:04:50.922112 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 5 00:04:50.922288 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 5 00:04:50.922419 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 5 00:04:50.922566 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 5 00:04:50.922752 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 5 00:04:50.922881 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 5 00:04:50.923009 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 5 00:04:50.923145 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 5 00:04:50.923332 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 5 00:04:50.923481 kernel: pci 
0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 5 00:04:50.923612 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 5 00:04:50.923761 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 5 00:04:50.923891 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 5 00:04:50.924037 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 5 00:04:50.924165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 5 00:04:50.924373 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 5 00:04:50.924520 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 5 00:04:50.924665 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 5 00:04:50.924822 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 5 00:04:50.924951 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 5 00:04:50.924962 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 5 00:04:50.924970 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 5 00:04:50.924978 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 5 00:04:50.924987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 5 00:04:50.925000 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 5 00:04:50.925008 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 5 00:04:50.925015 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 5 00:04:50.925023 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 5 00:04:50.925030 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 5 00:04:50.925038 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 5 00:04:50.925046 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 5 00:04:50.925053 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 5 00:04:50.925061 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 5 
00:04:50.925071 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 5 00:04:50.925079 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 5 00:04:50.925087 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 5 00:04:50.925094 kernel: iommu: Default domain type: Translated Sep 5 00:04:50.925102 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:04:50.925110 kernel: efivars: Registered efivars operations Sep 5 00:04:50.925117 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:04:50.925125 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:04:50.925133 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 5 00:04:50.925143 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 5 00:04:50.925150 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 5 00:04:50.925158 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 5 00:04:50.925337 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 5 00:04:50.925474 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 5 00:04:50.925604 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:04:50.925619 kernel: vgaarb: loaded Sep 5 00:04:50.925630 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 5 00:04:50.925645 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 5 00:04:50.925655 kernel: clocksource: Switched to clocksource kvm-clock Sep 5 00:04:50.925666 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:04:50.925677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:04:50.925685 kernel: pnp: PnP ACPI init Sep 5 00:04:50.925902 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 5 00:04:50.925920 kernel: pnp: PnP ACPI: found 6 devices Sep 5 00:04:50.925930 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:04:50.925941 kernel: NET: Registered 
PF_INET protocol family Sep 5 00:04:50.925956 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:04:50.925965 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 00:04:50.925977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:04:50.925988 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 00:04:50.925999 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 00:04:50.926009 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 00:04:50.926017 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.926024 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.926038 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:04:50.926049 kernel: NET: Registered PF_XDP protocol family Sep 5 00:04:50.926215 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 5 00:04:50.926402 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 5 00:04:50.926535 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:04:50.926669 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:04:50.926799 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:04:50.926914 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 5 00:04:50.927036 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 5 00:04:50.927151 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 5 00:04:50.927161 kernel: PCI: CLS 0 bytes, default 64 Sep 5 00:04:50.927169 kernel: Initialise system trusted keyrings Sep 5 00:04:50.927177 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 00:04:50.927185 kernel: Key type asymmetric 
registered Sep 5 00:04:50.927193 kernel: Asymmetric key parser 'x509' registered Sep 5 00:04:50.927200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:04:50.927208 kernel: io scheduler mq-deadline registered Sep 5 00:04:50.927266 kernel: io scheduler kyber registered Sep 5 00:04:50.927274 kernel: io scheduler bfq registered Sep 5 00:04:50.927282 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:04:50.927290 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 5 00:04:50.927298 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 5 00:04:50.927306 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 5 00:04:50.927314 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:04:50.927322 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:04:50.927330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 5 00:04:50.927341 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 5 00:04:50.927349 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 5 00:04:50.927499 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 5 00:04:50.927512 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 5 00:04:50.927631 kernel: rtc_cmos 00:04: registered as rtc0 Sep 5 00:04:50.927776 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:04:50 UTC (1757030690) Sep 5 00:04:50.927918 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 5 00:04:50.927929 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 5 00:04:50.927942 kernel: efifb: probing for efifb Sep 5 00:04:50.927950 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 5 00:04:50.927958 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 5 00:04:50.927965 kernel: efifb: scrolling: redraw Sep 5 00:04:50.927973 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 5 
00:04:50.927982 kernel: Console: switching to colour frame buffer device 100x37 Sep 5 00:04:50.928012 kernel: fb0: EFI VGA frame buffer device Sep 5 00:04:50.928023 kernel: pstore: Using crash dump compression: deflate Sep 5 00:04:50.928031 kernel: pstore: Registered efi_pstore as persistent store backend Sep 5 00:04:50.928042 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:04:50.928050 kernel: Segment Routing with IPv6 Sep 5 00:04:50.928059 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:04:50.928070 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:04:50.928081 kernel: Key type dns_resolver registered Sep 5 00:04:50.928092 kernel: IPI shorthand broadcast: enabled Sep 5 00:04:50.928100 kernel: sched_clock: Marking stable (864002935, 111471273)->(995815024, -20340816) Sep 5 00:04:50.928108 kernel: registered taskstats version 1 Sep 5 00:04:50.928116 kernel: Loading compiled-in X.509 certificates Sep 5 00:04:50.928127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c' Sep 5 00:04:50.928135 kernel: Key type .fscrypt registered Sep 5 00:04:50.928143 kernel: Key type fscrypt-provisioning registered Sep 5 00:04:50.928151 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 5 00:04:50.928159 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:04:50.928168 kernel: ima: No architecture policies found Sep 5 00:04:50.928176 kernel: clk: Disabling unused clocks Sep 5 00:04:50.928184 kernel: Freeing unused kernel image (initmem) memory: 42872K Sep 5 00:04:50.928192 kernel: Write protecting the kernel read-only data: 36864k Sep 5 00:04:50.928202 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 5 00:04:50.928210 kernel: Run /init as init process Sep 5 00:04:50.928232 kernel: with arguments: Sep 5 00:04:50.928240 kernel: /init Sep 5 00:04:50.928248 kernel: with environment: Sep 5 00:04:50.928256 kernel: HOME=/ Sep 5 00:04:50.928264 kernel: TERM=linux Sep 5 00:04:50.928272 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:04:50.928282 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:04:50.928296 systemd[1]: Detected virtualization kvm. Sep 5 00:04:50.928304 systemd[1]: Detected architecture x86-64. Sep 5 00:04:50.928313 systemd[1]: Running in initrd. Sep 5 00:04:50.928323 systemd[1]: No hostname configured, using default hostname. Sep 5 00:04:50.928334 systemd[1]: Hostname set to . Sep 5 00:04:50.928343 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:04:50.928351 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:04:50.928360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:04:50.928368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:04:50.928377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 5 00:04:50.928386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:04:50.928397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:04:50.928406 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:04:50.928416 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:04:50.928425 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:04:50.928434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:04:50.928442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:04:50.928450 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:04:50.928469 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:04:50.928478 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:04:50.928486 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:04:50.928495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:04:50.928503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:04:50.928512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:04:50.928520 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:04:50.928529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:04:50.928537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:04:50.928548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:04:50.928557 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:04:50.928565 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:04:50.928574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:04:50.928582 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:04:50.928591 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:04:50.928601 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:04:50.928615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:04:50.928629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:04:50.928648 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:04:50.928662 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:04:50.928677 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:04:50.928693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:04:50.928761 systemd-journald[192]: Collecting audit messages is disabled.
Sep 5 00:04:50.928789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:04:50.928801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:04:50.928813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:50.928829 systemd-journald[192]: Journal started
Sep 5 00:04:50.928852 systemd-journald[192]: Runtime Journal (/run/log/journal/34be31a7f8dc4558899b382bb74b17c0) is 6.0M, max 48.3M, 42.2M free.
Sep 5 00:04:50.931273 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:04:50.933576 systemd-modules-load[193]: Inserted module 'overlay'
Sep 5 00:04:50.935855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:04:50.939572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:04:50.941348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:04:50.957497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:04:50.960436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:04:50.965407 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:04:50.980250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:04:50.980713 dracut-cmdline[221]: dracut-dracut-053
Sep 5 00:04:50.982652 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 5 00:04:50.983551 kernel: Bridge firewalling registered
Sep 5 00:04:50.983965 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:04:50.989544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:04:50.995764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:04:51.007255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:04:51.017427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:04:51.051814 systemd-resolved[263]: Positive Trust Anchors:
Sep 5 00:04:51.051830 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:04:51.051860 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:04:51.062753 systemd-resolved[263]: Defaulting to hostname 'linux'.
Sep 5 00:04:51.064819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:04:51.067042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:04:51.078248 kernel: SCSI subsystem initialized
Sep 5 00:04:51.088245 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:04:51.099245 kernel: iscsi: registered transport (tcp)
Sep 5 00:04:51.121253 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:04:51.121276 kernel: QLogic iSCSI HBA Driver
Sep 5 00:04:51.173690 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:04:51.185372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:04:51.213412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:04:51.213459 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:04:51.214517 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:04:51.257253 kernel: raid6: avx2x4 gen() 29318 MB/s
Sep 5 00:04:51.274247 kernel: raid6: avx2x2 gen() 30413 MB/s
Sep 5 00:04:51.291320 kernel: raid6: avx2x1 gen() 25146 MB/s
Sep 5 00:04:51.291347 kernel: raid6: using algorithm avx2x2 gen() 30413 MB/s
Sep 5 00:04:51.309278 kernel: raid6: .... xor() 19265 MB/s, rmw enabled
Sep 5 00:04:51.309315 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:04:51.330247 kernel: xor: automatically using best checksumming function avx
Sep 5 00:04:51.500272 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:04:51.514660 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:04:51.521491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:04:51.535942 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 5 00:04:51.540770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:04:51.552408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:04:51.567391 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Sep 5 00:04:51.599817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:04:51.610413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:04:51.677990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:04:51.690393 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:04:51.703867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:04:51.707065 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:04:51.708523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:04:51.710571 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:04:51.719490 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:04:51.720494 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:04:51.721463 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:04:51.726342 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:04:51.732738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:04:51.732778 kernel: GPT:9289727 != 19775487
Sep 5 00:04:51.732794 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:04:51.733508 kernel: GPT:9289727 != 19775487
Sep 5 00:04:51.735095 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:04:51.735120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:04:51.737612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:04:51.737873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:04:51.742139 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:04:51.744761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:04:51.745730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:51.747339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:04:51.758792 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 5 00:04:51.758908 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:04:51.758929 kernel: libata version 3.00 loaded.
Sep 5 00:04:51.759854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:04:51.761549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:04:51.774282 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:04:51.776987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:04:51.779495 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (457)
Sep 5 00:04:51.779520 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:04:51.777189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:51.782764 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 5 00:04:51.782975 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:04:51.787015 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Sep 5 00:04:51.794337 kernel: scsi host0: ahci
Sep 5 00:04:51.795241 kernel: scsi host1: ahci
Sep 5 00:04:51.796239 kernel: scsi host2: ahci
Sep 5 00:04:51.797262 kernel: scsi host3: ahci
Sep 5 00:04:51.800980 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:04:51.802252 kernel: scsi host4: ahci
Sep 5 00:04:51.803305 kernel: scsi host5: ahci
Sep 5 00:04:51.805932 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 5 00:04:51.805952 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 5 00:04:51.805963 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 5 00:04:51.807904 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 5 00:04:51.807927 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 5 00:04:51.809903 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 5 00:04:51.811093 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:04:51.816130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:04:51.817396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:04:51.830035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:04:51.844559 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:04:51.847858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:04:51.851939 disk-uuid[565]: Primary Header is updated.
Sep 5 00:04:51.851939 disk-uuid[565]: Secondary Entries is updated.
Sep 5 00:04:51.851939 disk-uuid[565]: Secondary Header is updated.
Sep 5 00:04:51.855874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:04:51.859256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:04:51.872111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:51.881422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:04:51.900059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:04:52.120972 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:04:52.121069 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:04:52.121085 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:04:52.122250 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:04:52.123251 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:04:52.124258 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:04:52.125620 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:04:52.125657 kernel: ata3.00: applying bridge limits
Sep 5 00:04:52.126305 kernel: ata3.00: configured for UDMA/100
Sep 5 00:04:52.127260 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:04:52.176293 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:04:52.176678 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:04:52.190270 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:04:52.863283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:04:52.864008 disk-uuid[567]: The operation has completed successfully.
Sep 5 00:04:52.908073 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:04:52.908296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:04:52.941660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:04:52.949982 sh[595]: Success
Sep 5 00:04:52.968332 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 5 00:04:53.018331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:04:53.033759 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:04:53.037033 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:04:53.057755 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d
Sep 5 00:04:53.057817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:53.057834 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:04:53.058946 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:04:53.059863 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:04:53.071135 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:04:53.072123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:04:53.087478 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:04:53.092653 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:04:53.108527 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:53.108616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:53.108633 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:04:53.113269 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:04:53.126703 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:04:53.128508 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:53.142507 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:04:53.152606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:04:53.229168 ignition[690]: Ignition 2.19.0
Sep 5 00:04:53.229186 ignition[690]: Stage: fetch-offline
Sep 5 00:04:53.229255 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:53.229271 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:53.229430 ignition[690]: parsed url from cmdline: ""
Sep 5 00:04:53.229437 ignition[690]: no config URL provided
Sep 5 00:04:53.229445 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:04:53.229461 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:04:53.229518 ignition[690]: op(1): [started] loading QEMU firmware config module
Sep 5 00:04:53.229526 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:04:53.240665 ignition[690]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:04:53.275325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:04:53.288305 ignition[690]: parsing config with SHA512: 54c387b5e80ea9ac575e3806c2bd6b0caa52f452b79efe8b7a42c8312e077c2b5839fe0bdb66c2c95d8b82c6b817b34afd3dbc9852a1d72f6094ec63c0838e27
Sep 5 00:04:53.289438 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:04:53.297742 unknown[690]: fetched base config from "system"
Sep 5 00:04:53.297754 unknown[690]: fetched user config from "qemu"
Sep 5 00:04:53.299751 ignition[690]: fetch-offline: fetch-offline passed
Sep 5 00:04:53.299835 ignition[690]: Ignition finished successfully
Sep 5 00:04:53.305245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:04:53.318474 systemd-networkd[783]: lo: Link UP
Sep 5 00:04:53.318486 systemd-networkd[783]: lo: Gained carrier
Sep 5 00:04:53.320349 systemd-networkd[783]: Enumeration completed
Sep 5 00:04:53.320486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:04:53.320834 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:04:53.320839 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:04:53.322878 systemd[1]: Reached target network.target - Network.
Sep 5 00:04:53.322955 systemd-networkd[783]: eth0: Link UP
Sep 5 00:04:53.322960 systemd-networkd[783]: eth0: Gained carrier
Sep 5 00:04:53.322967 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:04:53.325581 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:04:53.331338 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:04:53.333736 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:04:53.361174 ignition[786]: Ignition 2.19.0
Sep 5 00:04:53.361188 ignition[786]: Stage: kargs
Sep 5 00:04:53.361449 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:53.361467 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:53.362633 ignition[786]: kargs: kargs passed
Sep 5 00:04:53.362697 ignition[786]: Ignition finished successfully
Sep 5 00:04:53.367075 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:04:53.376451 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:04:53.406704 ignition[795]: Ignition 2.19.0
Sep 5 00:04:53.406720 ignition[795]: Stage: disks
Sep 5 00:04:53.407003 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:53.407026 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:53.408697 ignition[795]: disks: disks passed
Sep 5 00:04:53.409720 ignition[795]: Ignition finished successfully
Sep 5 00:04:53.413176 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:04:53.414887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:04:53.416937 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:04:53.418403 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:04:53.420701 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:04:53.423184 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:04:53.435466 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:04:53.444687 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.15
Sep 5 00:04:53.444718 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Sep 5 00:04:53.456587 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:04:53.464976 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:04:53.479636 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:04:53.609314 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none.
Sep 5 00:04:53.610048 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:04:53.610904 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:04:53.620326 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:04:53.622601 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:04:53.624056 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:04:53.624098 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:04:53.633925 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Sep 5 00:04:53.633954 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:53.633968 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:53.633980 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:04:53.624124 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:04:53.638445 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:04:53.638365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:04:53.660671 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:04:53.674453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:04:53.719056 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:04:53.724677 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:04:53.729153 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:04:53.734165 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:04:53.833420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:04:53.846400 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:04:53.849200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:04:53.858249 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:53.877256 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:04:53.929879 ignition[928]: INFO : Ignition 2.19.0
Sep 5 00:04:53.929879 ignition[928]: INFO : Stage: mount
Sep 5 00:04:53.931667 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:53.931667 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:53.931667 ignition[928]: INFO : mount: mount passed
Sep 5 00:04:53.931667 ignition[928]: INFO : Ignition finished successfully
Sep 5 00:04:53.933486 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:04:53.942340 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:04:54.056687 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:04:54.072818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:04:54.083261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 5 00:04:54.086092 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:54.086537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:54.086610 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:04:54.090259 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:04:54.093409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:04:54.131239 ignition[957]: INFO : Ignition 2.19.0
Sep 5 00:04:54.131239 ignition[957]: INFO : Stage: files
Sep 5 00:04:54.133975 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:54.133975 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:54.133975 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:04:54.137402 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:04:54.137402 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:04:54.141313 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:04:54.142756 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:04:54.144931 unknown[957]: wrote ssh authorized keys file for user: core
Sep 5 00:04:54.146134 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:04:54.148865 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:04:54.150826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 5 00:04:54.195463 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:04:54.332336 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:04:54.332336 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 00:04:54.336511 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 5 00:04:54.568655 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 00:04:54.785966 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:54.788339 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 5 00:04:54.809600 systemd-networkd[783]: eth0: Gained IPv6LL
Sep 5 00:04:55.302746 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 00:04:56.965338 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:56.965338 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 5 00:04:57.062118 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 5 00:04:57.069862 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:04:57.226257 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:04:57.254637 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:04:57.263980 ignition[957]: INFO : files: files passed
Sep 5 00:04:57.263980 ignition[957]: INFO : Ignition finished successfully
Sep 5 00:04:57.285480 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:04:57.310572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:04:57.321635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:04:57.329953 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:04:57.330125 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:04:57.368336 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:04:57.381162 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:57.381162 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:57.385864 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:57.391291 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:04:57.397847 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:04:57.406602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:04:57.467390 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:04:57.467591 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:04:57.471822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:04:57.474632 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:04:57.477394 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:04:57.489787 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:04:57.535131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:04:57.555754 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:04:57.605624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:04:57.606098 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:04:57.618099 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:04:57.623975 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:04:57.624183 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:04:57.643876 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:04:57.645456 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:04:57.647718 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:04:57.652176 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:04:57.656207 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:04:57.657610 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:04:57.660073 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:04:57.666975 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:04:57.669714 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:04:57.672297 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:04:57.675481 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:04:57.675721 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:04:57.681262 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:04:57.681472 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:04:57.685581 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:04:57.685794 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:04:57.693488 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:04:57.693692 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:04:57.702097 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:04:57.702354 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:04:57.704721 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:04:57.706053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:04:57.709909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:04:57.728096 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:04:57.732531 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:04:57.744853 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:04:57.745089 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:04:57.749709 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:04:57.755059 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:04:57.759833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:04:57.760688 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:04:57.770614 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:04:57.770823 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:04:57.785790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:04:57.787055 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:04:57.787255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:04:57.791243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:04:57.799577 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:04:57.800161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:04:57.807740 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:04:57.809375 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:04:57.817620 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:04:57.817854 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:04:57.838023 ignition[1011]: INFO : Ignition 2.19.0
Sep 5 00:04:57.838023 ignition[1011]: INFO : Stage: umount
Sep 5 00:04:57.840519 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:57.840519 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:57.840519 ignition[1011]: INFO : umount: umount passed
Sep 5 00:04:57.840519 ignition[1011]: INFO : Ignition finished successfully
Sep 5 00:04:57.848821 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:04:57.849716 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:04:57.849908 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:04:57.854379 systemd[1]: Stopped target network.target - Network.
Sep 5 00:04:57.855648 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:04:57.855755 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:04:57.858170 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:04:57.858320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:04:57.860475 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:04:57.860559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:04:57.866827 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:04:57.866940 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:04:57.869662 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:04:57.871737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:04:57.876453 systemd-networkd[783]: eth0: DHCPv6 lease lost
Sep 5 00:04:57.881166 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:04:57.881440 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:04:57.886117 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:04:57.886179 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:04:57.893487 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:04:57.895772 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:04:57.895892 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:04:57.899078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:04:57.904680 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:04:57.904900 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:04:57.911855 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:04:57.912118 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:04:57.927174 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:04:57.927312 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:04:57.930177 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:04:57.930297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:04:57.932748 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:04:57.932824 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:04:57.935438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:04:57.935546 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:04:57.940285 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:04:57.940568 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:04:57.942620 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:04:57.942773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:04:57.946026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:04:57.946160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:04:57.947773 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:04:57.947856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:04:57.949934 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:04:57.950037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:04:57.953174 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:04:57.954318 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:04:57.956823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:04:57.956921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:04:57.976878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:04:57.978417 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:04:57.978543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:04:57.984043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:04:57.984144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:57.990767 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:04:57.990943 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:04:57.994162 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:04:58.005874 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:04:58.020839 systemd[1]: Switching root.
Sep 5 00:04:58.059149 systemd-journald[192]: Journal stopped
Sep 5 00:04:59.816329 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:04:59.816407 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:04:59.816434 kernel: SELinux: policy capability open_perms=1
Sep 5 00:04:59.816449 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:04:59.816464 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:04:59.816476 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:04:59.816494 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:04:59.816510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:04:59.816522 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:04:59.816533 kernel: audit: type=1403 audit(1757030698.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:04:59.816553 systemd[1]: Successfully loaded SELinux policy in 69.652ms.
Sep 5 00:04:59.816579 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.903ms.
Sep 5 00:04:59.816593 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:04:59.816605 systemd[1]: Detected virtualization kvm.
Sep 5 00:04:59.816617 systemd[1]: Detected architecture x86-64.
Sep 5 00:04:59.816629 systemd[1]: Detected first boot.
Sep 5 00:04:59.816642 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:04:59.816656 zram_generator::config[1056]: No configuration found.
Sep 5 00:04:59.816669 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:04:59.816687 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:04:59.816700 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:04:59.816713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:04:59.816731 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:04:59.816744 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:04:59.816756 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:04:59.816768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:04:59.816785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:04:59.816797 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:04:59.816815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:04:59.816827 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:04:59.816839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:04:59.816852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:04:59.816865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:04:59.816878 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:04:59.816890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:04:59.816902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:04:59.816915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:04:59.816933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:04:59.816947 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:04:59.816959 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:04:59.816971 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:04:59.816983 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:04:59.816996 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:04:59.817010 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:04:59.817028 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:04:59.817041 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:04:59.817053 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:04:59.817065 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:04:59.817077 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:04:59.817089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:04:59.817101 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:04:59.817113 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:04:59.817126 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:04:59.817138 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:04:59.817156 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:04:59.817169 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:04:59.817181 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:04:59.817193 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:04:59.817205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:04:59.817251 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:04:59.817265 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:04:59.817277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:04:59.817296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:04:59.817309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:04:59.817321 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:04:59.817333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:04:59.817345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:04:59.817358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:04:59.817370 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:04:59.817384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:04:59.817406 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:04:59.817421 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:04:59.817436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:04:59.817451 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:04:59.817467 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:04:59.817481 kernel: loop: module loaded
Sep 5 00:04:59.817495 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:04:59.817513 kernel: fuse: init (API version 7.39)
Sep 5 00:04:59.817524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:04:59.817543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:04:59.817583 systemd-journald[1126]: Collecting audit messages is disabled.
Sep 5 00:04:59.817606 systemd-journald[1126]: Journal started
Sep 5 00:04:59.817629 systemd-journald[1126]: Runtime Journal (/run/log/journal/34be31a7f8dc4558899b382bb74b17c0) is 6.0M, max 48.3M, 42.2M free.
Sep 5 00:04:59.590722 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:04:59.612906 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:04:59.613656 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:04:59.820156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:04:59.823143 kernel: ACPI: bus type drm_connector registered
Sep 5 00:04:59.823275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:04:59.825859 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:04:59.825892 systemd[1]: Stopped verity-setup.service.
Sep 5 00:04:59.829259 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:04:59.832672 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:04:59.833570 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:04:59.834839 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:04:59.836275 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:04:59.837494 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:04:59.838798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:04:59.840119 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:04:59.841523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:04:59.843133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:04:59.859582 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:04:59.859787 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:04:59.861502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:04:59.861688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:04:59.863417 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:04:59.863611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:04:59.865004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:04:59.865181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:04:59.866830 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:04:59.867009 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:04:59.868434 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:04:59.868610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:04:59.870150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:04:59.871768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:04:59.873344 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:04:59.890126 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:04:59.910336 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:04:59.912708 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:04:59.913849 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:04:59.913875 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:04:59.915904 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 00:04:59.918291 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 00:04:59.922391 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:04:59.923556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:04:59.927981 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:04:59.930134 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:04:59.931294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:04:59.933151 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:04:59.934440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:04:59.937391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:04:59.942613 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:04:59.946507 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:04:59.951872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:04:59.955920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:04:59.957517 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 00:04:59.959096 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:04:59.963485 systemd-journald[1126]: Time spent on flushing to /var/log/journal/34be31a7f8dc4558899b382bb74b17c0 is 31.096ms for 1003 entries.
Sep 5 00:04:59.963485 systemd-journald[1126]: System Journal (/var/log/journal/34be31a7f8dc4558899b382bb74b17c0) is 8.0M, max 195.6M, 187.6M free.
Sep 5 00:05:00.032435 systemd-journald[1126]: Received client request to flush runtime journal.
Sep 5 00:05:00.032563 kernel: loop0: detected capacity change from 0 to 140768
Sep 5 00:04:59.966448 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:04:59.986620 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 00:05:00.002828 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:05:00.012195 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 00:05:00.036258 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:05:00.042324 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:05:00.044063 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 5 00:05:00.060665 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:05:00.061719 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 00:05:00.063648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:05:00.072327 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:05:00.079258 kernel: loop1: detected capacity change from 0 to 221472
Sep 5 00:05:00.082505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:05:00.108362 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Sep 5 00:05:00.108381 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Sep 5 00:05:00.117407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:05:00.122264 kernel: loop2: detected capacity change from 0 to 142488
Sep 5 00:05:00.166395 kernel: loop3: detected capacity change from 0 to 140768
Sep 5 00:05:00.183247 kernel: loop4: detected capacity change from 0 to 221472
Sep 5 00:05:00.199250 kernel: loop5: detected capacity change from 0 to 142488
Sep 5 00:05:00.209064 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:05:00.209694 (sd-merge)[1202]: Merged extensions into '/usr'.
Sep 5 00:05:00.218123 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:05:00.218143 systemd[1]: Reloading...
Sep 5 00:05:00.295485 zram_generator::config[1227]: No configuration found.
Sep 5 00:05:00.425706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:05:00.460569 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:05:00.505495 systemd[1]: Reloading finished in 286 ms.
Sep 5 00:05:00.583817 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:05:00.585426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:05:00.602415 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:05:00.604819 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:05:00.609671 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:05:00.609685 systemd[1]: Reloading...
Sep 5 00:05:00.672252 zram_generator::config[1290]: No configuration found.
Sep 5 00:05:00.678952 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:05:00.679391 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:05:00.681069 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:05:00.681575 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 5 00:05:00.681720 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 5 00:05:00.689669 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:05:00.689763 systemd-tmpfiles[1266]: Skipping /boot
Sep 5 00:05:00.701514 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:05:00.701528 systemd-tmpfiles[1266]: Skipping /boot
Sep 5 00:05:00.800668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:05:00.850611 systemd[1]: Reloading finished in 240 ms.
Sep 5 00:05:00.875715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:05:00.884626 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 00:05:00.887436 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:05:00.889885 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:05:00.893324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:05:00.901214 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:05:00.903577 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:05:00.908813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.908991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:00.911591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:00.914272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:00.917344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:00.918577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:00.922160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:05:00.927341 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:05:00.929202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.930488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:00.930676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:00.932880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:05:00.934831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:00.935002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:00.937157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:00.937436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:00.952126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.953946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:00.963513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:00.963758 augenrules[1361]: No rules
Sep 5 00:05:00.968348 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Sep 5 00:05:00.976542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:00.981039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:00.982239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:00.987433 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:05:00.988489 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.989598 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:05:00.991375 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 00:05:00.993205 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:05:00.995113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:05:00.996905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:05:00.998983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:00.999174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:01.002214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:01.002451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:01.004176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:01.004513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:01.006287 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:05:01.039481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.039690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:01.052605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:01.057417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:05:01.060404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:01.062635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:01.064071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:01.068243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:05:01.069371 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:05:01.069404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.070042 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:05:01.071275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:01.071468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:01.075075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:01.079408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:01.085409 systemd-resolved[1334]: Positive Trust Anchors:
Sep 5 00:05:01.085735 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:05:01.085828 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:05:01.086431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:05:01.089437 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:05:01.091978 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Sep 5 00:05:01.094804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:05:01.095082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:05:01.096520 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:05:01.102698 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 00:05:01.103842 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:05:01.106829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:01.107075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:01.111241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:05:01.135251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1397)
Sep 5 00:05:01.173983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 5 00:05:01.175002 systemd-networkd[1408]: lo: Link UP
Sep 5 00:05:01.175015 systemd-networkd[1408]: lo: Gained carrier
Sep 5 00:05:01.176890 systemd-networkd[1408]: Enumeration completed
Sep 5 00:05:01.177332 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:01.177342 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:05:01.178106 systemd-networkd[1408]: eth0: Link UP
Sep 5 00:05:01.178111 systemd-networkd[1408]: eth0: Gained carrier
Sep 5 00:05:01.178123 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:01.181647 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:05:01.182900 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:05:01.185606 kernel: ACPI: button: Power Button [PWRF]
Sep 5 00:05:01.186910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:05:01.188625 systemd[1]: Reached target network.target - Network.
Sep 5 00:05:01.189687 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 00:05:01.194246 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 5 00:05:01.196347 systemd-networkd[1408]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:05:01.197172 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 5 00:05:01.696392 systemd-resolved[1334]: Clock change detected. Flushing caches.
Sep 5 00:05:01.696489 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:05:01.696585 systemd-timesyncd[1412]: Initial clock synchronization to Fri 2025-09-05 00:05:01.696341 UTC.
Sep 5 00:05:01.698813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 00:05:01.706685 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:05:01.739723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:05:01.763532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:01.777552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:05:01.777854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:01.790316 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 5 00:05:01.790675 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 00:05:01.790844 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 5 00:05:01.791043 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 00:05:01.793464 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 00:05:01.795487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:01.873622 kernel: kvm_amd: TSC scaling supported
Sep 5 00:05:01.873713 kernel: kvm_amd: Nested Virtualization enabled
Sep 5 00:05:01.873734 kernel: kvm_amd: Nested Paging enabled
Sep 5 00:05:01.873783 kernel: kvm_amd: LBR virtualization supported
Sep 5 00:05:01.874752 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 5 00:05:01.874787 kernel: kvm_amd: Virtual GIF supported
Sep 5 00:05:01.898086 kernel: EDAC MC: Ver: 3.0.0
Sep 5 00:05:01.907920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:01.929920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 00:05:01.938767 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 00:05:01.947467 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:05:01.976758 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 00:05:01.978296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:05:01.979383 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:05:01.980550 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 00:05:01.981788 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 00:05:01.983216 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 00:05:01.984360 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 00:05:01.985630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 00:05:01.986874 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 00:05:01.986909 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:05:01.987817 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:05:01.989650 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 00:05:01.992504 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 00:05:02.006128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 00:05:02.010197 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 00:05:02.012768 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 00:05:02.015025 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:05:02.016690 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:05:02.017910 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:05:02.017990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:05:02.020757 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 00:05:02.024613 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 00:05:02.029536 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 00:05:02.031046 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:05:02.033635 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 00:05:02.034743 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 00:05:02.037787 jq[1446]: false
Sep 5 00:05:02.040597 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 00:05:02.045654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 00:05:02.050141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 00:05:02.052737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 00:05:02.058741 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 00:05:02.060298 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 00:05:02.061161 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 00:05:02.062972 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found loop3
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found loop4
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found loop5
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found sr0
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found vda
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found vda1
Sep 5 00:05:02.064184 extend-filesystems[1447]: Found vda2
Sep 5 00:05:02.071902 dbus-daemon[1445]: [system] SELinux support is enabled
Sep 5 00:05:02.065957 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found vda3
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found usr
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found vda4
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found vda6
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found vda7
Sep 5 00:05:02.080093 extend-filesystems[1447]: Found vda9
Sep 5 00:05:02.080093 extend-filesystems[1447]: Checking size of /dev/vda9
Sep 5 00:05:02.071116 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 00:05:02.085408 jq[1461]: true
Sep 5 00:05:02.074922 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 00:05:02.092040 update_engine[1459]: I20250905 00:05:02.087916 1459 main.cc:92] Flatcar Update Engine starting
Sep 5 00:05:02.092040 update_engine[1459]: I20250905 00:05:02.089460 1459 update_check_scheduler.cc:74] Next update check in 7m52s
Sep 5 00:05:02.092269 extend-filesystems[1447]: Resized partition /dev/vda9
Sep 5 00:05:02.090951 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 00:05:02.095840 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Sep 5 00:05:02.091277 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 00:05:02.091658 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 00:05:02.091859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 00:05:02.097954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 00:05:02.098226 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 00:05:02.101458 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 00:05:02.105458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1396)
Sep 5 00:05:02.118161 jq[1471]: true
Sep 5 00:05:02.142323 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 00:05:02.163473 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 00:05:02.169743 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 00:05:02.173845 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 00:05:02.190673 tar[1470]: linux-amd64/helm
Sep 5 00:05:02.173874 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 00:05:02.175691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 00:05:02.175708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 00:05:02.189625 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 00:05:02.193654 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 00:05:02.193654 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 00:05:02.193654 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 00:05:02.192690 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 5 00:05:02.215256 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 00:05:02.215366 extend-filesystems[1447]: Resized filesystem in /dev/vda9
Sep 5 00:05:02.192711 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 00:05:02.195041 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 00:05:02.195312 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 00:05:02.199265 systemd-logind[1458]: New seat seat0.
Sep 5 00:05:02.216530 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 00:05:02.218011 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 00:05:02.224354 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 00:05:02.272506 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 00:05:02.278626 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 00:05:02.341886 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 00:05:02.348782 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 00:05:02.369619 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 00:05:02.369975 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 00:05:02.374679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 00:05:02.396027 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 00:05:02.405149 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 00:05:02.415235 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 00:05:02.417276 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 00:05:02.498458 containerd[1472]: time="2025-09-05T00:05:02.496851112Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 00:05:02.532654 containerd[1472]: time="2025-09-05T00:05:02.532604254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.534933 containerd[1472]: time="2025-09-05T00:05:02.534898988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.534933 containerd[1472]: time="2025-09-05T00:05:02.534929546Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 00:05:02.534984 containerd[1472]: time="2025-09-05T00:05:02.534945255Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.535182 containerd[1472]: time="2025-09-05T00:05:02.535159006Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 00:05:02.535211 containerd[1472]: time="2025-09-05T00:05:02.535181679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535282 containerd[1472]: time="2025-09-05T00:05:02.535257321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535282 containerd[1472]: time="2025-09-05T00:05:02.535276757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535540 containerd[1472]: time="2025-09-05T00:05:02.535513882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535540 containerd[1472]: time="2025-09-05T00:05:02.535535913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535587 containerd[1472]: time="2025-09-05T00:05:02.535550631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535587 containerd[1472]: time="2025-09-05T00:05:02.535565068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.535772 containerd[1472]: time="2025-09-05T00:05:02.535744494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.536234 containerd[1472]: time="2025-09-05T00:05:02.536201592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.536417 containerd[1472]: time="2025-09-05T00:05:02.536384846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.536459 containerd[1472]: time="2025-09-05T00:05:02.536414732Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 00:05:02.536613 containerd[1472]: time="2025-09-05T00:05:02.536583047Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 00:05:02.536712 containerd[1472]: time="2025-09-05T00:05:02.536683556Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:05:02.542319 containerd[1472]: time="2025-09-05T00:05:02.542244156Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 00:05:02.542319 containerd[1472]: time="2025-09-05T00:05:02.542304649Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 00:05:02.542319 containerd[1472]: time="2025-09-05T00:05:02.542326751Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 00:05:02.542541 containerd[1472]: time="2025-09-05T00:05:02.542345195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 00:05:02.542541 containerd[1472]: time="2025-09-05T00:05:02.542363470Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 00:05:02.542628 containerd[1472]: time="2025-09-05T00:05:02.542608880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 00:05:02.543014 containerd[1472]: time="2025-09-05T00:05:02.542971540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 00:05:02.543164 containerd[1472]: time="2025-09-05T00:05:02.543129987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 00:05:02.543164 containerd[1472]: time="2025-09-05T00:05:02.543158090Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 00:05:02.543221 containerd[1472]: time="2025-09-05T00:05:02.543175763Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 00:05:02.543221 containerd[1472]: time="2025-09-05T00:05:02.543194068Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543221 containerd[1472]: time="2025-09-05T00:05:02.543212643Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543309 containerd[1472]: time="2025-09-05T00:05:02.543228302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543309 containerd[1472]: time="2025-09-05T00:05:02.543246756Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543309 containerd[1472]: time="2025-09-05T00:05:02.543264760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543309 containerd[1472]: time="2025-09-05T00:05:02.543280390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543309 containerd[1472]: time="2025-09-05T00:05:02.543297512Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543450 containerd[1472]: time="2025-09-05T00:05:02.543313061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.543450 containerd[1472]: time="2025-09-05T00:05:02.543416976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543518 containerd[1472]: time="2025-09-05T00:05:02.543435040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543518 containerd[1472]: time="2025-09-05T00:05:02.543489281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543518 containerd[1472]: time="2025-09-05T00:05:02.543506333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543605 containerd[1472]: time="2025-09-05T00:05:02.543522704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543605 containerd[1472]: time="2025-09-05T00:05:02.543541409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543605 containerd[1472]: time="2025-09-05T00:05:02.543582316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543682 containerd[1472]: time="2025-09-05T00:05:02.543627972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543682 containerd[1472]: time="2025-09-05T00:05:02.543648420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543682 containerd[1472]: time="2025-09-05T00:05:02.543667506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543757 containerd[1472]: time="2025-09-05T00:05:02.543685730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543757 containerd[1472]: time="2025-09-05T00:05:02.543719934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543757 containerd[1472]: time="2025-09-05T00:05:02.543739521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543855 containerd[1472]: time="2025-09-05T00:05:02.543759508Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 00:05:02.543855 containerd[1472]: time="2025-09-05T00:05:02.543788853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543855 containerd[1472]: time="2025-09-05T00:05:02.543808470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.543855 containerd[1472]: time="2025-09-05T00:05:02.543826704Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.543980 containerd[1472]: time="2025-09-05T00:05:02.543954835Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 00:05:02.544103 containerd[1472]: time="2025-09-05T00:05:02.543983669Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 00:05:02.544103 containerd[1472]: time="2025-09-05T00:05:02.544088816Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.544168 containerd[1472]: time="2025-09-05T00:05:02.544109134Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 00:05:02.544168 containerd[1472]: time="2025-09-05T00:05:02.544123581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.544168 containerd[1472]: time="2025-09-05T00:05:02.544148257Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 00:05:02.544168 containerd[1472]: time="2025-09-05T00:05:02.544163576Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 00:05:02.544281 containerd[1472]: time="2025-09-05T00:05:02.544177853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.546164 containerd[1472]: time="2025-09-05T00:05:02.545892669Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:05:02.546164 containerd[1472]: time="2025-09-05T00:05:02.546133421Z" level=info msg="Connect containerd service" Sep 5 00:05:02.546648 containerd[1472]: time="2025-09-05T00:05:02.546195447Z" level=info msg="using legacy CRI server" Sep 5 00:05:02.546648 containerd[1472]: time="2025-09-05T00:05:02.546208191Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:05:02.546648 containerd[1472]: time="2025-09-05T00:05:02.546331312Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:05:02.547776 containerd[1472]: time="2025-09-05T00:05:02.547742008Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:05:02.548163 containerd[1472]: time="2025-09-05T00:05:02.548114156Z" level=info msg="Start subscribing containerd event" Sep 5 00:05:02.548163 containerd[1472]: time="2025-09-05T00:05:02.548166044Z" level=info msg="Start recovering state" Sep 5 00:05:02.548338 containerd[1472]: time="2025-09-05T00:05:02.548318730Z" level=info msg="Start event monitor" Sep 5 00:05:02.548386 containerd[1472]: time="2025-09-05T00:05:02.548322998Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 5 00:05:02.548386 containerd[1472]: time="2025-09-05T00:05:02.548352734Z" level=info msg="Start snapshots syncer" Sep 5 00:05:02.548469 containerd[1472]: time="2025-09-05T00:05:02.548388291Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:05:02.548469 containerd[1472]: time="2025-09-05T00:05:02.548397287Z" level=info msg="Start streaming server" Sep 5 00:05:02.548469 containerd[1472]: time="2025-09-05T00:05:02.548428776Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:05:02.548545 containerd[1472]: time="2025-09-05T00:05:02.548528974Z" level=info msg="containerd successfully booted in 0.052846s" Sep 5 00:05:02.548700 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:05:02.762829 tar[1470]: linux-amd64/LICENSE Sep 5 00:05:02.762829 tar[1470]: linux-amd64/README.md Sep 5 00:05:02.780552 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:05:03.627824 systemd-networkd[1408]: eth0: Gained IPv6LL Sep 5 00:05:03.633650 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:05:03.639418 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:05:03.658773 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:05:03.662675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:03.666005 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:05:03.697691 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:05:03.698034 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:05:03.700933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:05:03.702851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 5 00:05:04.594527 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:05:04.622665 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:53244.service - OpenSSH per-connection server daemon (10.0.0.1:53244). Sep 5 00:05:04.804057 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 53244 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:04.807308 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:04.819128 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:05:04.831947 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:05:04.837056 systemd-logind[1458]: New session 1 of user core. Sep 5 00:05:04.941283 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:05:04.957239 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:05:04.963410 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:05:05.203220 systemd[1558]: Queued start job for default target default.target. Sep 5 00:05:05.271657 systemd[1558]: Created slice app.slice - User Application Slice. Sep 5 00:05:05.271699 systemd[1558]: Reached target paths.target - Paths. Sep 5 00:05:05.271720 systemd[1558]: Reached target timers.target - Timers. Sep 5 00:05:05.274143 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:05:05.291034 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:05:05.291258 systemd[1558]: Reached target sockets.target - Sockets. Sep 5 00:05:05.291279 systemd[1558]: Reached target basic.target - Basic System. Sep 5 00:05:05.291366 systemd[1558]: Reached target default.target - Main User Target. Sep 5 00:05:05.291422 systemd[1558]: Startup finished in 312ms. Sep 5 00:05:05.292455 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 5 00:05:05.296112 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:05:05.378978 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:53248.service - OpenSSH per-connection server daemon (10.0.0.1:53248). Sep 5 00:05:05.496787 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 53248 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:05.502607 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:05.522562 systemd-logind[1458]: New session 2 of user core. Sep 5 00:05:05.537902 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:05:05.637839 sshd[1569]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:05.658188 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:53248.service: Deactivated successfully. Sep 5 00:05:05.660906 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:05:05.663326 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:05:05.671099 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:53256.service - OpenSSH per-connection server daemon (10.0.0.1:53256). Sep 5 00:05:05.890850 systemd-logind[1458]: Removed session 2. Sep 5 00:05:05.948428 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 53256 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:05.950718 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:05.956624 systemd-logind[1458]: New session 3 of user core. Sep 5 00:05:05.975803 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:05:06.045040 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:06.057193 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:53256.service: Deactivated successfully. Sep 5 00:05:06.062317 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:05:06.063323 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. 
Sep 5 00:05:06.064682 systemd-logind[1458]: Removed session 3. Sep 5 00:05:06.119335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:06.121791 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:05:06.124319 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:06.124588 systemd[1]: Startup finished in 1.000s (kernel) + 7.818s (initrd) + 7.134s (userspace) = 15.953s. Sep 5 00:05:06.915802 kubelet[1587]: E0905 00:05:06.915691 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:06.920684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:06.922047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:06.923635 systemd[1]: kubelet.service: Consumed 2.690s CPU time. Sep 5 00:05:16.056404 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:35862.service - OpenSSH per-connection server daemon (10.0.0.1:35862). Sep 5 00:05:16.096391 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 35862 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.098233 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.103488 systemd-logind[1458]: New session 4 of user core. Sep 5 00:05:16.113688 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:05:16.171584 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:16.196758 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:35862.service: Deactivated successfully. 
Sep 5 00:05:16.198751 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:05:16.200622 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:05:16.214796 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878). Sep 5 00:05:16.215978 systemd-logind[1458]: Removed session 4. Sep 5 00:05:16.247023 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.248498 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.252678 systemd-logind[1458]: New session 5 of user core. Sep 5 00:05:16.272562 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:05:16.323550 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:16.341097 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:35878.service: Deactivated successfully. Sep 5 00:05:16.343601 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:05:16.345492 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:05:16.356799 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:35886.service - OpenSSH per-connection server daemon (10.0.0.1:35886). Sep 5 00:05:16.357953 systemd-logind[1458]: Removed session 5. Sep 5 00:05:16.389463 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 35886 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.390932 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.395349 systemd-logind[1458]: New session 6 of user core. Sep 5 00:05:16.404600 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:05:16.459188 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:16.474743 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:35886.service: Deactivated successfully. 
Sep 5 00:05:16.476824 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:05:16.478646 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:05:16.493755 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:35888.service - OpenSSH per-connection server daemon (10.0.0.1:35888). Sep 5 00:05:16.494913 systemd-logind[1458]: Removed session 6. Sep 5 00:05:16.527589 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 35888 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.530121 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.535812 systemd-logind[1458]: New session 7 of user core. Sep 5 00:05:16.549709 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:05:16.609594 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:05:16.610048 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:16.634323 sudo[1625]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:16.636849 sshd[1622]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:16.651689 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:35888.service: Deactivated successfully. Sep 5 00:05:16.653974 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:05:16.655724 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:05:16.657368 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:35890.service - OpenSSH per-connection server daemon (10.0.0.1:35890). Sep 5 00:05:16.658195 systemd-logind[1458]: Removed session 7. Sep 5 00:05:16.694056 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 35890 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.695575 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.699424 systemd-logind[1458]: New session 8 of user core. 
Sep 5 00:05:16.709550 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:05:16.765798 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:05:16.766256 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:16.770568 sudo[1634]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:16.777431 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:05:16.777816 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:16.800673 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:05:16.802618 auditctl[1637]: No rules Sep 5 00:05:16.803976 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:05:16.804274 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:05:16.806249 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:05:16.840804 augenrules[1655]: No rules Sep 5 00:05:16.843063 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:05:16.844805 sudo[1633]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:16.846959 sshd[1630]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:16.859512 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:35890.service: Deactivated successfully. Sep 5 00:05:16.861474 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:05:16.863905 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:05:16.869833 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:35906.service - OpenSSH per-connection server daemon (10.0.0.1:35906). Sep 5 00:05:16.870839 systemd-logind[1458]: Removed session 8. 
Sep 5 00:05:16.902767 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 35906 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:05:16.904481 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:16.908723 systemd-logind[1458]: New session 9 of user core. Sep 5 00:05:16.926618 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:05:16.927601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:05:16.929460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:16.983693 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:05:16.984153 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:17.170595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:17.177788 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:17.308573 kubelet[1684]: E0905 00:05:17.308487 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:17.316493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:17.316772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:18.615081 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 5 00:05:18.617404 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:05:20.014714 dockerd[1700]: time="2025-09-05T00:05:20.014642802Z" level=info msg="Starting up" Sep 5 00:05:20.784133 dockerd[1700]: time="2025-09-05T00:05:20.782875356Z" level=info msg="Loading containers: start." Sep 5 00:05:21.060584 kernel: Initializing XFRM netlink socket Sep 5 00:05:21.168826 systemd-networkd[1408]: docker0: Link UP Sep 5 00:05:21.189910 dockerd[1700]: time="2025-09-05T00:05:21.189873909Z" level=info msg="Loading containers: done." Sep 5 00:05:21.206623 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1420653203-merged.mount: Deactivated successfully. Sep 5 00:05:21.209523 dockerd[1700]: time="2025-09-05T00:05:21.209469806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:05:21.209631 dockerd[1700]: time="2025-09-05T00:05:21.209601743Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:05:21.209772 dockerd[1700]: time="2025-09-05T00:05:21.209743108Z" level=info msg="Daemon has completed initialization" Sep 5 00:05:21.247718 dockerd[1700]: time="2025-09-05T00:05:21.247644781Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:05:21.247988 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:05:22.200158 containerd[1472]: time="2025-09-05T00:05:22.200080037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:05:23.140695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850648799.mount: Deactivated successfully. 
Sep 5 00:05:24.603152 containerd[1472]: time="2025-09-05T00:05:24.603092271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:24.604082 containerd[1472]: time="2025-09-05T00:05:24.604035811Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 5 00:05:24.605364 containerd[1472]: time="2025-09-05T00:05:24.605300523Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:24.608615 containerd[1472]: time="2025-09-05T00:05:24.608572731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:24.609925 containerd[1472]: time="2025-09-05T00:05:24.609879853Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.409717992s" Sep 5 00:05:24.610001 containerd[1472]: time="2025-09-05T00:05:24.609931049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 5 00:05:24.610662 containerd[1472]: time="2025-09-05T00:05:24.610636943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:05:26.628037 containerd[1472]: time="2025-09-05T00:05:26.627909359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:26.628679 containerd[1472]: time="2025-09-05T00:05:26.628593192Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 5 00:05:26.629874 containerd[1472]: time="2025-09-05T00:05:26.629840722Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:26.633152 containerd[1472]: time="2025-09-05T00:05:26.633115074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:26.634357 containerd[1472]: time="2025-09-05T00:05:26.634313181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.023648947s" Sep 5 00:05:26.634357 containerd[1472]: time="2025-09-05T00:05:26.634342065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 5 00:05:26.635072 containerd[1472]: time="2025-09-05T00:05:26.634907225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:05:27.523027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:05:27.534645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:28.073059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:05:28.079419 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:28.138087 kubelet[1913]: E0905 00:05:28.138007 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:28.142707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:28.142947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:28.949432 containerd[1472]: time="2025-09-05T00:05:28.949359576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:28.950260 containerd[1472]: time="2025-09-05T00:05:28.950213729Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 5 00:05:28.951535 containerd[1472]: time="2025-09-05T00:05:28.951505081Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:28.954582 containerd[1472]: time="2025-09-05T00:05:28.954513644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:28.955522 containerd[1472]: time="2025-09-05T00:05:28.955487741Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.320552172s" Sep 5 00:05:28.955565 containerd[1472]: time="2025-09-05T00:05:28.955521925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 5 00:05:28.956055 containerd[1472]: time="2025-09-05T00:05:28.956018356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:05:30.352798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332274712.mount: Deactivated successfully. Sep 5 00:05:31.124590 containerd[1472]: time="2025-09-05T00:05:31.124498305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:31.125292 containerd[1472]: time="2025-09-05T00:05:31.125232091Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 5 00:05:31.126429 containerd[1472]: time="2025-09-05T00:05:31.126405853Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:31.128698 containerd[1472]: time="2025-09-05T00:05:31.128646235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:31.129309 containerd[1472]: time="2025-09-05T00:05:31.129279393Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.173227915s" Sep 5 00:05:31.129352 containerd[1472]: time="2025-09-05T00:05:31.129309419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 5 00:05:31.129861 containerd[1472]: time="2025-09-05T00:05:31.129838281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:05:31.673482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996554070.mount: Deactivated successfully. Sep 5 00:05:33.592635 containerd[1472]: time="2025-09-05T00:05:33.592566816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:33.593672 containerd[1472]: time="2025-09-05T00:05:33.593636202Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:05:33.595509 containerd[1472]: time="2025-09-05T00:05:33.595473548Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:33.602461 containerd[1472]: time="2025-09-05T00:05:33.602413155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:33.603662 containerd[1472]: time="2025-09-05T00:05:33.603607956Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.473741031s" Sep 5 00:05:33.603715 containerd[1472]: time="2025-09-05T00:05:33.603661737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:05:33.604185 containerd[1472]: time="2025-09-05T00:05:33.604162677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:05:34.197607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953376613.mount: Deactivated successfully. Sep 5 00:05:34.203428 containerd[1472]: time="2025-09-05T00:05:34.203371003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:34.204114 containerd[1472]: time="2025-09-05T00:05:34.204076923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:05:34.205135 containerd[1472]: time="2025-09-05T00:05:34.205112439Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:34.207474 containerd[1472]: time="2025-09-05T00:05:34.207429083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:34.208364 containerd[1472]: time="2025-09-05T00:05:34.208330952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 604.13927ms" Sep 5 
00:05:34.208413 containerd[1472]: time="2025-09-05T00:05:34.208367362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:05:34.209048 containerd[1472]: time="2025-09-05T00:05:34.208901982Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 00:05:34.832162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94131408.mount: Deactivated successfully. Sep 5 00:05:36.902741 containerd[1472]: time="2025-09-05T00:05:36.902665917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:36.903458 containerd[1472]: time="2025-09-05T00:05:36.903353227Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 5 00:05:36.904716 containerd[1472]: time="2025-09-05T00:05:36.904680718Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:36.907799 containerd[1472]: time="2025-09-05T00:05:36.907768630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:36.909052 containerd[1472]: time="2025-09-05T00:05:36.908993082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.700060481s" Sep 5 00:05:36.909052 containerd[1472]: time="2025-09-05T00:05:36.909044371Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 5 00:05:38.273105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 00:05:38.284742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:38.460417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:38.465784 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:38.507055 kubelet[2075]: E0905 00:05:38.506989 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:38.511791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:38.512024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:39.446216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:39.457677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:39.485416 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-9.scope)... Sep 5 00:05:39.485457 systemd[1]: Reloading... Sep 5 00:05:39.583469 zram_generator::config[2133]: No configuration found. Sep 5 00:05:40.057122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:05:40.137077 systemd[1]: Reloading finished in 651 ms. Sep 5 00:05:40.203208 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 5 00:05:40.207114 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:05:40.207386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:40.214884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:40.395087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:40.399855 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:05:40.439685 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:05:40.439685 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:05:40.439685 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:05:40.439685 kubelet[2181]: I0905 00:05:40.439128 2181 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:05:40.689961 kubelet[2181]: I0905 00:05:40.689838 2181 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:05:40.689961 kubelet[2181]: I0905 00:05:40.689872 2181 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:05:40.690178 kubelet[2181]: I0905 00:05:40.690151 2181 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:05:40.731432 kubelet[2181]: E0905 00:05:40.731372 2181 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.732718 kubelet[2181]: I0905 00:05:40.732655 2181 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:05:40.768338 kubelet[2181]: E0905 00:05:40.768303 2181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:05:40.768338 kubelet[2181]: I0905 00:05:40.768333 2181 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:05:40.775488 kubelet[2181]: I0905 00:05:40.775463 2181 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:05:40.776451 kubelet[2181]: I0905 00:05:40.776407 2181 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:05:40.776691 kubelet[2181]: I0905 00:05:40.776640 2181 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:05:40.776881 kubelet[2181]: I0905 00:05:40.776681 2181 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 5 00:05:40.777018 kubelet[2181]: I0905 00:05:40.776891 2181 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:05:40.777018 kubelet[2181]: I0905 00:05:40.776902 2181 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:05:40.777065 kubelet[2181]: I0905 00:05:40.777036 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:40.779592 kubelet[2181]: I0905 00:05:40.779562 2181 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:05:40.779644 kubelet[2181]: I0905 00:05:40.779600 2181 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:05:40.779676 kubelet[2181]: I0905 00:05:40.779653 2181 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:05:40.779701 kubelet[2181]: I0905 00:05:40.779681 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:05:40.782990 kubelet[2181]: I0905 00:05:40.782962 2181 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:05:40.783399 kubelet[2181]: I0905 00:05:40.783377 2181 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:05:40.784049 kubelet[2181]: W0905 00:05:40.784005 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:40.784098 kubelet[2181]: E0905 00:05:40.784060 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.784551 
kubelet[2181]: W0905 00:05:40.784504 2181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:05:40.785423 kubelet[2181]: W0905 00:05:40.785365 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:40.785423 kubelet[2181]: E0905 00:05:40.785414 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.787563 kubelet[2181]: I0905 00:05:40.787431 2181 server.go:1274] "Started kubelet" Sep 5 00:05:40.787748 kubelet[2181]: I0905 00:05:40.787717 2181 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:05:40.789724 kubelet[2181]: I0905 00:05:40.789698 2181 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:05:40.792407 kubelet[2181]: I0905 00:05:40.792370 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:05:40.793958 kubelet[2181]: E0905 00:05:40.792641 2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a264c12cf6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:05:40.787384175 +0000 UTC 
m=+0.380996585,LastTimestamp:2025-09-05 00:05:40.787384175 +0000 UTC m=+0.380996585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:05:40.794353 kubelet[2181]: I0905 00:05:40.787719 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:05:40.794620 kubelet[2181]: I0905 00:05:40.794591 2181 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:05:40.794841 kubelet[2181]: I0905 00:05:40.794814 2181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:05:40.796911 kubelet[2181]: E0905 00:05:40.796496 2181 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:05:40.797835 kubelet[2181]: E0905 00:05:40.797789 2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.797878 kubelet[2181]: I0905 00:05:40.797860 2181 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:05:40.798770 kubelet[2181]: I0905 00:05:40.798746 2181 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:05:40.798954 kubelet[2181]: I0905 00:05:40.798922 2181 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:05:40.798997 kubelet[2181]: W0905 00:05:40.798900 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:40.799091 kubelet[2181]: E0905 00:05:40.799060 2181 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.799195 kubelet[2181]: I0905 00:05:40.799174 2181 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:05:40.799260 kubelet[2181]: E0905 00:05:40.799220 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Sep 5 00:05:40.799301 kubelet[2181]: I0905 00:05:40.799268 2181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:05:40.800888 kubelet[2181]: I0905 00:05:40.800853 2181 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:05:40.818657 kubelet[2181]: I0905 00:05:40.818598 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:05:40.820538 kubelet[2181]: I0905 00:05:40.820431 2181 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:05:40.820538 kubelet[2181]: I0905 00:05:40.820537 2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:05:40.820622 kubelet[2181]: I0905 00:05:40.820556 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:40.820848 kubelet[2181]: I0905 00:05:40.820824 2181 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:05:40.820881 kubelet[2181]: I0905 00:05:40.820851 2181 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:05:40.820881 kubelet[2181]: I0905 00:05:40.820871 2181 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:05:40.821032 kubelet[2181]: E0905 00:05:40.820927 2181 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:05:40.821457 kubelet[2181]: W0905 00:05:40.821417 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:40.821935 kubelet[2181]: E0905 00:05:40.821477 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.899011 kubelet[2181]: E0905 00:05:40.898956 2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.921104 kubelet[2181]: E0905 00:05:40.921058 2181 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:05:40.999511 kubelet[2181]: E0905 00:05:40.999320 2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:41.000032 kubelet[2181]: E0905 00:05:40.999952 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection 
refused" interval="400ms" Sep 5 00:05:41.056984 kubelet[2181]: I0905 00:05:41.056914 2181 policy_none.go:49] "None policy: Start" Sep 5 00:05:41.058259 kubelet[2181]: I0905 00:05:41.058216 2181 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:05:41.058259 kubelet[2181]: I0905 00:05:41.058266 2181 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:05:41.066767 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:05:41.084111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:05:41.087672 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:05:41.098942 kubelet[2181]: I0905 00:05:41.098779 2181 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:05:41.099240 kubelet[2181]: I0905 00:05:41.099140 2181 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:05:41.099240 kubelet[2181]: I0905 00:05:41.099165 2181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:05:41.099451 kubelet[2181]: I0905 00:05:41.099415 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:05:41.100847 kubelet[2181]: E0905 00:05:41.100819 2181 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:05:41.133309 systemd[1]: Created slice kubepods-burstable-pod48cef7e6f10caa2a47abdafa118d8e92.slice - libcontainer container kubepods-burstable-pod48cef7e6f10caa2a47abdafa118d8e92.slice. 
Sep 5 00:05:41.200069 kubelet[2181]: I0905 00:05:41.200017 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.200069 kubelet[2181]: I0905 00:05:41.200062 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:05:41.200243 kubelet[2181]: I0905 00:05:41.200086 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:41.200243 kubelet[2181]: I0905 00:05:41.200105 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:41.200243 kubelet[2181]: I0905 00:05:41.200125 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost" Sep 5 
00:05:41.200243 kubelet[2181]: I0905 00:05:41.200146 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.200243 kubelet[2181]: I0905 00:05:41.200166 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.200350 kubelet[2181]: I0905 00:05:41.200185 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.200350 kubelet[2181]: I0905 00:05:41.200205 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.201216 kubelet[2181]: I0905 00:05:41.201197 2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:41.201620 kubelet[2181]: E0905 00:05:41.201591 2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 5 
00:05:41.209795 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 5 00:05:41.225164 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 5 00:05:41.402851 kubelet[2181]: E0905 00:05:41.402771 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Sep 5 00:05:41.404040 kubelet[2181]: I0905 00:05:41.404014 2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:41.404566 kubelet[2181]: E0905 00:05:41.404504 2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 5 00:05:41.507598 kubelet[2181]: E0905 00:05:41.507521 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.509640 containerd[1472]: time="2025-09-05T00:05:41.509022526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48cef7e6f10caa2a47abdafa118d8e92,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.523147 kubelet[2181]: E0905 00:05:41.523097 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.524019 containerd[1472]: time="2025-09-05T00:05:41.523973009Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.528357 kubelet[2181]: E0905 00:05:41.528307 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.529095 containerd[1472]: time="2025-09-05T00:05:41.529052524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.728103 kubelet[2181]: W0905 00:05:41.727924 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:41.728103 kubelet[2181]: E0905 00:05:41.728000 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:41.774935 kubelet[2181]: W0905 00:05:41.774888 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:41.775051 kubelet[2181]: E0905 00:05:41.774942 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" 
Sep 5 00:05:41.806757 kubelet[2181]: I0905 00:05:41.806725 2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:41.807022 kubelet[2181]: E0905 00:05:41.806989 2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 5 00:05:41.964135 kubelet[2181]: W0905 00:05:41.964040 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:41.964275 kubelet[2181]: E0905 00:05:41.964139 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.072958 kubelet[2181]: W0905 00:05:42.072876 2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 5 00:05:42.072958 kubelet[2181]: E0905 00:05:42.072943 2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.203923 kubelet[2181]: E0905 00:05:42.203851 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Sep 5 00:05:42.609155 kubelet[2181]: I0905 00:05:42.609110 2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:42.609773 kubelet[2181]: E0905 00:05:42.609533 2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 5 00:05:42.865243 kubelet[2181]: E0905 00:05:42.865069 2181 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.940005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047210479.mount: Deactivated successfully. 
Sep 5 00:05:42.948412 containerd[1472]: time="2025-09-05T00:05:42.948358048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.949508 containerd[1472]: time="2025-09-05T00:05:42.949473084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.950259 containerd[1472]: time="2025-09-05T00:05:42.950201002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:05:42.951139 containerd[1472]: time="2025-09-05T00:05:42.951096950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.952376 containerd[1472]: time="2025-09-05T00:05:42.952321485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:05:42.953664 containerd[1472]: time="2025-09-05T00:05:42.953617085Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.954665 containerd[1472]: time="2025-09-05T00:05:42.954620317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:05:42.957351 containerd[1472]: time="2025-09-05T00:05:42.957318321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.959175 
containerd[1472]: time="2025-09-05T00:05:42.959141747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.450003008s" Sep 5 00:05:42.959947 containerd[1472]: time="2025-09-05T00:05:42.959896095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.43581549s" Sep 5 00:05:42.960756 containerd[1472]: time="2025-09-05T00:05:42.960720597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.431564336s" Sep 5 00:05:43.158912 containerd[1472]: time="2025-09-05T00:05:43.158372198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.158912 containerd[1472]: time="2025-09-05T00:05:43.158491956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.158912 containerd[1472]: time="2025-09-05T00:05:43.158507316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.158912 containerd[1472]: time="2025-09-05T00:05:43.158650859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.163431 containerd[1472]: time="2025-09-05T00:05:43.162493788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.163431 containerd[1472]: time="2025-09-05T00:05:43.162568520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.163431 containerd[1472]: time="2025-09-05T00:05:43.162594901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.163431 containerd[1472]: time="2025-09-05T00:05:43.162765436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.164928 containerd[1472]: time="2025-09-05T00:05:43.164179698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.164928 containerd[1472]: time="2025-09-05T00:05:43.164232529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.164928 containerd[1472]: time="2025-09-05T00:05:43.164248008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.165789 containerd[1472]: time="2025-09-05T00:05:43.164320206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.248404 systemd[1]: Started cri-containerd-f047235a4145cda9b5b47c7cd188f563a57903f23d25ea97d78b72e4a5b29c3a.scope - libcontainer container f047235a4145cda9b5b47c7cd188f563a57903f23d25ea97d78b72e4a5b29c3a. 
Sep 5 00:05:43.254745 systemd[1]: Started cri-containerd-18eab880578a5048e5e13942495b73963d85de70a9f2526217dba28459c49d00.scope - libcontainer container 18eab880578a5048e5e13942495b73963d85de70a9f2526217dba28459c49d00. Sep 5 00:05:43.258219 systemd[1]: Started cri-containerd-dec568644e17e8aa7ced0e5fc5b93976f002f51934c669b2c4f82ab9e51d8323.scope - libcontainer container dec568644e17e8aa7ced0e5fc5b93976f002f51934c669b2c4f82ab9e51d8323. Sep 5 00:05:43.315945 containerd[1472]: time="2025-09-05T00:05:43.315877851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"18eab880578a5048e5e13942495b73963d85de70a9f2526217dba28459c49d00\"" Sep 5 00:05:43.318091 kubelet[2181]: E0905 00:05:43.318065 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.321359 containerd[1472]: time="2025-09-05T00:05:43.321048039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f047235a4145cda9b5b47c7cd188f563a57903f23d25ea97d78b72e4a5b29c3a\"" Sep 5 00:05:43.322563 kubelet[2181]: E0905 00:05:43.322530 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.323115 containerd[1472]: time="2025-09-05T00:05:43.323072344Z" level=info msg="CreateContainer within sandbox \"18eab880578a5048e5e13942495b73963d85de70a9f2526217dba28459c49d00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:05:43.324618 containerd[1472]: time="2025-09-05T00:05:43.324579395Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48cef7e6f10caa2a47abdafa118d8e92,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec568644e17e8aa7ced0e5fc5b93976f002f51934c669b2c4f82ab9e51d8323\"" Sep 5 00:05:43.325231 kubelet[2181]: E0905 00:05:43.325206 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.326903 containerd[1472]: time="2025-09-05T00:05:43.326868234Z" level=info msg="CreateContainer within sandbox \"f047235a4145cda9b5b47c7cd188f563a57903f23d25ea97d78b72e4a5b29c3a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:05:43.327739 containerd[1472]: time="2025-09-05T00:05:43.327630666Z" level=info msg="CreateContainer within sandbox \"dec568644e17e8aa7ced0e5fc5b93976f002f51934c669b2c4f82ab9e51d8323\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:05:43.349991 containerd[1472]: time="2025-09-05T00:05:43.349926538Z" level=info msg="CreateContainer within sandbox \"18eab880578a5048e5e13942495b73963d85de70a9f2526217dba28459c49d00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"959bb3739cfd4f893366cd11c0f6cdc5cfaca704aecdc970fd12d553b88c026d\"" Sep 5 00:05:43.350736 containerd[1472]: time="2025-09-05T00:05:43.350692858Z" level=info msg="StartContainer for \"959bb3739cfd4f893366cd11c0f6cdc5cfaca704aecdc970fd12d553b88c026d\"" Sep 5 00:05:43.355709 containerd[1472]: time="2025-09-05T00:05:43.355670077Z" level=info msg="CreateContainer within sandbox \"f047235a4145cda9b5b47c7cd188f563a57903f23d25ea97d78b72e4a5b29c3a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"440bb2c6cf4f91dc2a8a251f604f06087f3bab4da8fea9ba74aeec113fc41fec\"" Sep 5 00:05:43.356236 containerd[1472]: time="2025-09-05T00:05:43.356189687Z" level=info msg="StartContainer for 
\"440bb2c6cf4f91dc2a8a251f604f06087f3bab4da8fea9ba74aeec113fc41fec\"" Sep 5 00:05:43.357537 containerd[1472]: time="2025-09-05T00:05:43.357505683Z" level=info msg="CreateContainer within sandbox \"dec568644e17e8aa7ced0e5fc5b93976f002f51934c669b2c4f82ab9e51d8323\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53ed21e8a3defe920199da3a5ef7c3ad77a53be74e8e2545643c9ad811a1cb31\"" Sep 5 00:05:43.358066 containerd[1472]: time="2025-09-05T00:05:43.358033298Z" level=info msg="StartContainer for \"53ed21e8a3defe920199da3a5ef7c3ad77a53be74e8e2545643c9ad811a1cb31\"" Sep 5 00:05:43.382626 systemd[1]: Started cri-containerd-959bb3739cfd4f893366cd11c0f6cdc5cfaca704aecdc970fd12d553b88c026d.scope - libcontainer container 959bb3739cfd4f893366cd11c0f6cdc5cfaca704aecdc970fd12d553b88c026d. Sep 5 00:05:43.385390 systemd[1]: Started cri-containerd-53ed21e8a3defe920199da3a5ef7c3ad77a53be74e8e2545643c9ad811a1cb31.scope - libcontainer container 53ed21e8a3defe920199da3a5ef7c3ad77a53be74e8e2545643c9ad811a1cb31. Sep 5 00:05:43.416696 systemd[1]: Started cri-containerd-440bb2c6cf4f91dc2a8a251f604f06087f3bab4da8fea9ba74aeec113fc41fec.scope - libcontainer container 440bb2c6cf4f91dc2a8a251f604f06087f3bab4da8fea9ba74aeec113fc41fec. 
Sep 5 00:05:43.447278 containerd[1472]: time="2025-09-05T00:05:43.446847717Z" level=info msg="StartContainer for \"53ed21e8a3defe920199da3a5ef7c3ad77a53be74e8e2545643c9ad811a1cb31\" returns successfully" Sep 5 00:05:43.456811 containerd[1472]: time="2025-09-05T00:05:43.456659835Z" level=info msg="StartContainer for \"959bb3739cfd4f893366cd11c0f6cdc5cfaca704aecdc970fd12d553b88c026d\" returns successfully" Sep 5 00:05:43.475086 containerd[1472]: time="2025-09-05T00:05:43.474983172Z" level=info msg="StartContainer for \"440bb2c6cf4f91dc2a8a251f604f06087f3bab4da8fea9ba74aeec113fc41fec\" returns successfully" Sep 5 00:05:43.898939 kubelet[2181]: E0905 00:05:43.898890 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.902928 kubelet[2181]: E0905 00:05:43.902897 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.903407 kubelet[2181]: E0905 00:05:43.903386 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:44.212566 kubelet[2181]: I0905 00:05:44.212001 2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:44.814703 kubelet[2181]: E0905 00:05:44.814653 2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:05:44.904194 kubelet[2181]: I0905 00:05:44.903548 2181 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:05:44.909969 kubelet[2181]: E0905 00:05:44.909921 2181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:44.910221 kubelet[2181]: E0905 00:05:44.910178 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:45.365752 kubelet[2181]: E0905 00:05:45.365706 2181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:45.365927 kubelet[2181]: E0905 00:05:45.365892 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:45.784463 kubelet[2181]: I0905 00:05:45.784393 2181 apiserver.go:52] "Watching apiserver" Sep 5 00:05:45.799373 kubelet[2181]: I0905 00:05:45.799314 2181 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:05:46.998469 kubelet[2181]: E0905 00:05:46.998401 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:47.172625 update_engine[1459]: I20250905 00:05:47.172531 1459 update_attempter.cc:509] Updating boot flags... Sep 5 00:05:47.207539 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2466) Sep 5 00:05:47.243478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2470) Sep 5 00:05:47.281594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2470) Sep 5 00:05:47.343660 systemd[1]: Reloading requested from client PID 2475 ('systemctl') (unit session-9.scope)... Sep 5 00:05:47.343678 systemd[1]: Reloading... 
Sep 5 00:05:47.429559 zram_generator::config[2515]: No configuration found. Sep 5 00:05:47.556243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:05:47.668718 systemd[1]: Reloading finished in 324 ms. Sep 5 00:05:47.722291 kubelet[2181]: I0905 00:05:47.722164 2181 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:05:47.722617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:47.749176 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:05:47.749565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:47.749666 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 132.6M memory peak, 0B memory swap peak. Sep 5 00:05:47.761672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:47.959619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:47.965794 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:05:48.018242 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:05:48.018242 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:05:48.018242 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:05:48.018805 kubelet[2559]: I0905 00:05:48.018474 2559 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:05:48.029403 kubelet[2559]: I0905 00:05:48.029332 2559 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:05:48.029403 kubelet[2559]: I0905 00:05:48.029369 2559 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:05:48.029698 kubelet[2559]: I0905 00:05:48.029672 2559 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:05:48.031121 kubelet[2559]: I0905 00:05:48.031084 2559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:05:48.033213 kubelet[2559]: I0905 00:05:48.033153 2559 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:05:48.036426 kubelet[2559]: E0905 00:05:48.036395 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:05:48.036426 kubelet[2559]: I0905 00:05:48.036425 2559 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:05:48.042997 kubelet[2559]: I0905 00:05:48.042936 2559 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:05:48.043160 kubelet[2559]: I0905 00:05:48.043126 2559 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:05:48.043320 kubelet[2559]: I0905 00:05:48.043269 2559 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:05:48.043543 kubelet[2559]: I0905 00:05:48.043310 2559 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 5 00:05:48.043673 kubelet[2559]: I0905 00:05:48.043557 2559 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:05:48.043673 kubelet[2559]: I0905 00:05:48.043567 2559 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:05:48.043673 kubelet[2559]: I0905 00:05:48.043599 2559 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:48.043776 kubelet[2559]: I0905 00:05:48.043750 2559 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:05:48.043776 kubelet[2559]: I0905 00:05:48.043763 2559 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:05:48.043839 kubelet[2559]: I0905 00:05:48.043801 2559 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:05:48.043839 kubelet[2559]: I0905 00:05:48.043822 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:05:48.044959 kubelet[2559]: I0905 00:05:48.044938 2559 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:05:48.045365 kubelet[2559]: I0905 00:05:48.045346 2559 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:05:48.047905 kubelet[2559]: I0905 00:05:48.045856 2559 server.go:1274] "Started kubelet" Sep 5 00:05:48.047905 kubelet[2559]: I0905 00:05:48.046164 2559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:05:48.047905 kubelet[2559]: I0905 00:05:48.046190 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:05:48.047905 kubelet[2559]: I0905 00:05:48.046507 2559 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:05:48.047905 kubelet[2559]: I0905 00:05:48.047303 2559 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:05:48.048976 kubelet[2559]: 
I0905 00:05:48.048929 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:05:48.049338 kubelet[2559]: I0905 00:05:48.049321 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:05:48.050979 kubelet[2559]: I0905 00:05:48.050961 2559 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:05:48.051316 kubelet[2559]: E0905 00:05:48.051288 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:48.053388 kubelet[2559]: I0905 00:05:48.052510 2559 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:05:48.057040 kubelet[2559]: I0905 00:05:48.052672 2559 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:05:48.057117 kubelet[2559]: I0905 00:05:48.054548 2559 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:05:48.057368 kubelet[2559]: I0905 00:05:48.057278 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:05:48.066461 kubelet[2559]: I0905 00:05:48.066407 2559 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:05:48.073082 kubelet[2559]: E0905 00:05:48.073049 2559 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:05:48.085427 kubelet[2559]: I0905 00:05:48.085370 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:05:48.087206 kubelet[2559]: I0905 00:05:48.087181 2559 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:05:48.087614 kubelet[2559]: I0905 00:05:48.087597 2559 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:05:48.087816 kubelet[2559]: I0905 00:05:48.087802 2559 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:05:48.088018 kubelet[2559]: E0905 00:05:48.087993 2559 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:05:48.114827 kubelet[2559]: I0905 00:05:48.114798 2559 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:05:48.115196 kubelet[2559]: I0905 00:05:48.114961 2559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:05:48.115196 kubelet[2559]: I0905 00:05:48.114984 2559 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:48.115196 kubelet[2559]: I0905 00:05:48.115133 2559 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:05:48.115196 kubelet[2559]: I0905 00:05:48.115144 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:05:48.115196 kubelet[2559]: I0905 00:05:48.115162 2559 policy_none.go:49] "None policy: Start" Sep 5 00:05:48.116086 kubelet[2559]: I0905 00:05:48.116057 2559 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:05:48.116150 kubelet[2559]: I0905 00:05:48.116108 2559 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:05:48.116398 kubelet[2559]: I0905 00:05:48.116380 2559 state_mem.go:75] "Updated machine memory state" Sep 5 00:05:48.124114 kubelet[2559]: I0905 00:05:48.123384 2559 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:05:48.124114 kubelet[2559]: I0905 00:05:48.123672 2559 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:05:48.124114 kubelet[2559]: I0905 00:05:48.123691 2559 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:05:48.124114 kubelet[2559]: I0905 00:05:48.124026 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:05:48.196582 kubelet[2559]: E0905 00:05:48.196539 2559 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.230679 kubelet[2559]: I0905 00:05:48.230505 2559 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:48.237249 kubelet[2559]: I0905 00:05:48.237220 2559 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 00:05:48.237363 kubelet[2559]: I0905 00:05:48.237319 2559 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:05:48.258078 kubelet[2559]: I0905 00:05:48.258018 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.258078 kubelet[2559]: I0905 00:05:48.258065 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.258286 kubelet[2559]: I0905 00:05:48.258091 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 
00:05:48.258286 kubelet[2559]: I0905 00:05:48.258115 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:05:48.258286 kubelet[2559]: I0905 00:05:48.258206 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:05:48.258286 kubelet[2559]: I0905 00:05:48.258269 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:05:48.258392 kubelet[2559]: I0905 00:05:48.258300 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48cef7e6f10caa2a47abdafa118d8e92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48cef7e6f10caa2a47abdafa118d8e92\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:05:48.258392 kubelet[2559]: I0905 00:05:48.258322 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:05:48.258392 kubelet[2559]: I0905 00:05:48.258344 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:05:48.346269 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 5 00:05:48.346831 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 5 00:05:48.495479 kubelet[2559]: E0905 00:05:48.495329 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:48.497489 kubelet[2559]: E0905 00:05:48.497449 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:48.497649 kubelet[2559]: E0905 00:05:48.497608 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:48.876051 sudo[2597]: pam_unix(sudo:session): session closed for user root
Sep 5 00:05:49.045154 kubelet[2559]: I0905 00:05:49.045106 2559 apiserver.go:52] "Watching apiserver"
Sep 5 00:05:49.057702 kubelet[2559]: I0905 00:05:49.057657 2559 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 00:05:49.100980 kubelet[2559]: E0905 00:05:49.100925 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:49.101714 kubelet[2559]: E0905 00:05:49.101677 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:49.101966 kubelet[2559]: E0905 00:05:49.101883 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:49.137566 kubelet[2559]: I0905 00:05:49.136982 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.13694776 podStartE2EDuration="3.13694776s" podCreationTimestamp="2025-09-05 00:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.126697102 +0000 UTC m=+1.156016880" watchObservedRunningTime="2025-09-05 00:05:49.13694776 +0000 UTC m=+1.166267548"
Sep 5 00:05:49.146230 kubelet[2559]: I0905 00:05:49.146148 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1461231299999999 podStartE2EDuration="1.14612313s" podCreationTimestamp="2025-09-05 00:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.137335816 +0000 UTC m=+1.166655604" watchObservedRunningTime="2025-09-05 00:05:49.14612313 +0000 UTC m=+1.175442908"
Sep 5 00:05:50.101953 kubelet[2559]: E0905 00:05:50.101898 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:50.557347 sudo[1669]: pam_unix(sudo:session): session closed for user root
Sep 5 00:05:50.560009 sshd[1663]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:50.564305 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:35906.service: Deactivated successfully.
Sep 5 00:05:50.566546 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 00:05:50.566756 systemd[1]: session-9.scope: Consumed 6.317s CPU time, 157.1M memory peak, 0B memory swap peak.
Sep 5 00:05:50.567369 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Sep 5 00:05:50.568745 systemd-logind[1458]: Removed session 9.
Sep 5 00:05:51.001276 kubelet[2559]: E0905 00:05:51.001134 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:52.157080 kubelet[2559]: I0905 00:05:52.157025 2559 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:05:52.157584 containerd[1472]: time="2025-09-05T00:05:52.157545579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:05:52.157909 kubelet[2559]: I0905 00:05:52.157877 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:05:53.212469 kubelet[2559]: I0905 00:05:53.211606 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.211567201 podStartE2EDuration="5.211567201s" podCreationTimestamp="2025-09-05 00:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.147044887 +0000 UTC m=+1.176364665" watchObservedRunningTime="2025-09-05 00:05:53.211567201 +0000 UTC m=+5.240886979"
Sep 5 00:05:53.231202 systemd[1]: Created slice kubepods-besteffort-pod26e0fc09_494b_474d_9d13_00f30b9cf27c.slice - libcontainer container kubepods-besteffort-pod26e0fc09_494b_474d_9d13_00f30b9cf27c.slice.
Sep 5 00:05:53.250923 systemd[1]: Created slice kubepods-burstable-pod48a9996e_3cfe_4c60_adb6_4faa6ae8425c.slice - libcontainer container kubepods-burstable-pod48a9996e_3cfe_4c60_adb6_4faa6ae8425c.slice.
Sep 5 00:05:53.276983 systemd[1]: Created slice kubepods-besteffort-pod156ed862_6632_4160_9bcd_c42ca1eaab40.slice - libcontainer container kubepods-besteffort-pod156ed862_6632_4160_9bcd_c42ca1eaab40.slice.
Sep 5 00:05:53.281629 kubelet[2559]: I0905 00:05:53.281571 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-kernel\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281629 kubelet[2559]: I0905 00:05:53.281619 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hubble-tls\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281629 kubelet[2559]: I0905 00:05:53.281637 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26e0fc09-494b-474d-9d13-00f30b9cf27c-xtables-lock\") pod \"kube-proxy-qhklt\" (UID: \"26e0fc09-494b-474d-9d13-00f30b9cf27c\") " pod="kube-system/kube-proxy-qhklt"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281650 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hostproc\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281663 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-lib-modules\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281683 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjspb\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-kube-api-access-sjspb\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281701 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-cgroup\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281727 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-etc-cni-netd\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281808 kubelet[2559]: I0905 00:05:53.281747 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-xtables-lock\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281946 kubelet[2559]: I0905 00:05:53.281770 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvcc4\" (UniqueName: \"kubernetes.io/projected/26e0fc09-494b-474d-9d13-00f30b9cf27c-kube-api-access-dvcc4\") pod \"kube-proxy-qhklt\" (UID: \"26e0fc09-494b-474d-9d13-00f30b9cf27c\") " pod="kube-system/kube-proxy-qhklt"
Sep 5 00:05:53.281946 kubelet[2559]: I0905 00:05:53.281787 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-bpf-maps\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281946 kubelet[2559]: I0905 00:05:53.281801 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-clustermesh-secrets\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.281946 kubelet[2559]: I0905 00:05:53.281814 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26e0fc09-494b-474d-9d13-00f30b9cf27c-lib-modules\") pod \"kube-proxy-qhklt\" (UID: \"26e0fc09-494b-474d-9d13-00f30b9cf27c\") " pod="kube-system/kube-proxy-qhklt"
Sep 5 00:05:53.281946 kubelet[2559]: I0905 00:05:53.281830 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-run\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.282068 kubelet[2559]: I0905 00:05:53.281846 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjld2\" (UniqueName: \"kubernetes.io/projected/156ed862-6632-4160-9bcd-c42ca1eaab40-kube-api-access-wjld2\") pod \"cilium-operator-5d85765b45-k76s2\" (UID: \"156ed862-6632-4160-9bcd-c42ca1eaab40\") " pod="kube-system/cilium-operator-5d85765b45-k76s2"
Sep 5 00:05:53.282068 kubelet[2559]: I0905 00:05:53.281859 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26e0fc09-494b-474d-9d13-00f30b9cf27c-kube-proxy\") pod \"kube-proxy-qhklt\" (UID: \"26e0fc09-494b-474d-9d13-00f30b9cf27c\") " pod="kube-system/kube-proxy-qhklt"
Sep 5 00:05:53.282068 kubelet[2559]: I0905 00:05:53.281881 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cni-path\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.282068 kubelet[2559]: I0905 00:05:53.281898 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-net\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.282068 kubelet[2559]: I0905 00:05:53.281914 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/156ed862-6632-4160-9bcd-c42ca1eaab40-cilium-config-path\") pod \"cilium-operator-5d85765b45-k76s2\" (UID: \"156ed862-6632-4160-9bcd-c42ca1eaab40\") " pod="kube-system/cilium-operator-5d85765b45-k76s2"
Sep 5 00:05:53.282206 kubelet[2559]: I0905 00:05:53.281932 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-config-path\") pod \"cilium-mlcmw\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " pod="kube-system/cilium-mlcmw"
Sep 5 00:05:53.547611 kubelet[2559]: E0905 00:05:53.547548 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:53.548291 containerd[1472]: time="2025-09-05T00:05:53.548242605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhklt,Uid:26e0fc09-494b-474d-9d13-00f30b9cf27c,Namespace:kube-system,Attempt:0,}"
Sep 5 00:05:53.554668 kubelet[2559]: E0905 00:05:53.554635 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:53.555176 containerd[1472]: time="2025-09-05T00:05:53.555134887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlcmw,Uid:48a9996e-3cfe-4c60-adb6-4faa6ae8425c,Namespace:kube-system,Attempt:0,}"
Sep 5 00:05:53.580408 kubelet[2559]: E0905 00:05:53.580365 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:53.580765 containerd[1472]: time="2025-09-05T00:05:53.580733630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k76s2,Uid:156ed862-6632-4160-9bcd-c42ca1eaab40,Namespace:kube-system,Attempt:0,}"
Sep 5 00:05:53.900023 containerd[1472]: time="2025-09-05T00:05:53.899792294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:05:53.900023 containerd[1472]: time="2025-09-05T00:05:53.899913784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:05:53.900023 containerd[1472]: time="2025-09-05T00:05:53.899945895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.900412 containerd[1472]: time="2025-09-05T00:05:53.900102591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.911699 containerd[1472]: time="2025-09-05T00:05:53.909490111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:05:53.911699 containerd[1472]: time="2025-09-05T00:05:53.909629916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:05:53.911699 containerd[1472]: time="2025-09-05T00:05:53.909653932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.911699 containerd[1472]: time="2025-09-05T00:05:53.909829934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.935737 systemd[1]: Started cri-containerd-81b44dc4f9247710791ee01baa7c70151f125487d0e8569a4081a548351e70c6.scope - libcontainer container 81b44dc4f9247710791ee01baa7c70151f125487d0e8569a4081a548351e70c6.
Sep 5 00:05:53.938528 containerd[1472]: time="2025-09-05T00:05:53.938216728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:05:53.938528 containerd[1472]: time="2025-09-05T00:05:53.938298683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:05:53.938528 containerd[1472]: time="2025-09-05T00:05:53.938315255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.938927 containerd[1472]: time="2025-09-05T00:05:53.938458546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:05:53.940834 systemd[1]: Started cri-containerd-a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607.scope - libcontainer container a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607.
Sep 5 00:05:53.963769 systemd[1]: Started cri-containerd-614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66.scope - libcontainer container 614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66.
Sep 5 00:05:53.986146 containerd[1472]: time="2025-09-05T00:05:53.986090339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhklt,Uid:26e0fc09-494b-474d-9d13-00f30b9cf27c,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b44dc4f9247710791ee01baa7c70151f125487d0e8569a4081a548351e70c6\""
Sep 5 00:05:53.987089 kubelet[2559]: E0905 00:05:53.986924 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:53.992459 containerd[1472]: time="2025-09-05T00:05:53.992327172Z" level=info msg="CreateContainer within sandbox \"81b44dc4f9247710791ee01baa7c70151f125487d0e8569a4081a548351e70c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:05:54.009327 containerd[1472]: time="2025-09-05T00:05:54.009288937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlcmw,Uid:48a9996e-3cfe-4c60-adb6-4faa6ae8425c,Namespace:kube-system,Attempt:0,} returns sandbox id \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\""
Sep 5 00:05:54.010876 kubelet[2559]: E0905 00:05:54.010845 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:54.013993 containerd[1472]: time="2025-09-05T00:05:54.013945316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 5 00:05:54.024091 containerd[1472]: time="2025-09-05T00:05:54.024012172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k76s2,Uid:156ed862-6632-4160-9bcd-c42ca1eaab40,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\""
Sep 5 00:05:54.025102 kubelet[2559]: E0905 00:05:54.024934 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:54.091768 containerd[1472]: time="2025-09-05T00:05:54.091692699Z" level=info msg="CreateContainer within sandbox \"81b44dc4f9247710791ee01baa7c70151f125487d0e8569a4081a548351e70c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6536f7601b828d1e69499737f3218822755d94f09b68f848489348601c365e0d\""
Sep 5 00:05:54.092362 containerd[1472]: time="2025-09-05T00:05:54.092319905Z" level=info msg="StartContainer for \"6536f7601b828d1e69499737f3218822755d94f09b68f848489348601c365e0d\""
Sep 5 00:05:54.129632 systemd[1]: Started cri-containerd-6536f7601b828d1e69499737f3218822755d94f09b68f848489348601c365e0d.scope - libcontainer container 6536f7601b828d1e69499737f3218822755d94f09b68f848489348601c365e0d.
Sep 5 00:05:54.163349 containerd[1472]: time="2025-09-05T00:05:54.163189550Z" level=info msg="StartContainer for \"6536f7601b828d1e69499737f3218822755d94f09b68f848489348601c365e0d\" returns successfully"
Sep 5 00:05:55.114197 kubelet[2559]: E0905 00:05:55.114159 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:55.125666 kubelet[2559]: I0905 00:05:55.125595 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhklt" podStartSLOduration=2.125543855 podStartE2EDuration="2.125543855s" podCreationTimestamp="2025-09-05 00:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:55.12539794 +0000 UTC m=+7.154717728" watchObservedRunningTime="2025-09-05 00:05:55.125543855 +0000 UTC m=+7.154863643"
Sep 5 00:05:55.572982 kubelet[2559]: E0905 00:05:55.572933 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:56.116066 kubelet[2559]: E0905 00:05:56.116014 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:56.116591 kubelet[2559]: E0905 00:05:56.116304 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:58.381163 kubelet[2559]: E0905 00:05:58.381106 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:05:59.121567 kubelet[2559]: E0905 00:05:59.121516 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:00.123607 kubelet[2559]: E0905 00:06:00.123557 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:01.007495 kubelet[2559]: E0905 00:06:01.007351 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:01.995056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597309383.mount: Deactivated successfully.
Sep 5 00:06:06.059689 containerd[1472]: time="2025-09-05T00:06:06.059604727Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:06.060656 containerd[1472]: time="2025-09-05T00:06:06.060584652Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 5 00:06:06.062101 containerd[1472]: time="2025-09-05T00:06:06.062046673Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:06.065344 containerd[1472]: time="2025-09-05T00:06:06.065289077Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.051296021s"
Sep 5 00:06:06.065344 containerd[1472]: time="2025-09-05T00:06:06.065344170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 5 00:06:06.073098 containerd[1472]: time="2025-09-05T00:06:06.073060775Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 5 00:06:06.088743 containerd[1472]: time="2025-09-05T00:06:06.088685775Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:06:06.103872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174582680.mount: Deactivated successfully.
Sep 5 00:06:06.127707 containerd[1472]: time="2025-09-05T00:06:06.127633264Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\""
Sep 5 00:06:06.132908 containerd[1472]: time="2025-09-05T00:06:06.131811197Z" level=info msg="StartContainer for \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\""
Sep 5 00:06:06.170686 systemd[1]: Started cri-containerd-fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c.scope - libcontainer container fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c.
Sep 5 00:06:06.202176 containerd[1472]: time="2025-09-05T00:06:06.202111630Z" level=info msg="StartContainer for \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\" returns successfully"
Sep 5 00:06:06.213952 systemd[1]: cri-containerd-fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c.scope: Deactivated successfully.
Sep 5 00:06:06.584410 containerd[1472]: time="2025-09-05T00:06:06.584319058Z" level=info msg="shim disconnected" id=fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c namespace=k8s.io
Sep 5 00:06:06.584410 containerd[1472]: time="2025-09-05T00:06:06.584398818Z" level=warning msg="cleaning up after shim disconnected" id=fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c namespace=k8s.io
Sep 5 00:06:06.584410 containerd[1472]: time="2025-09-05T00:06:06.584410369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:06:07.101403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c-rootfs.mount: Deactivated successfully.
Sep 5 00:06:07.146968 kubelet[2559]: E0905 00:06:07.146913 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:07.149137 containerd[1472]: time="2025-09-05T00:06:07.149048647Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:06:07.164244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420313430.mount: Deactivated successfully.
Sep 5 00:06:07.166752 containerd[1472]: time="2025-09-05T00:06:07.166712478Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\""
Sep 5 00:06:07.167946 containerd[1472]: time="2025-09-05T00:06:07.167839047Z" level=info msg="StartContainer for \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\""
Sep 5 00:06:07.207756 systemd[1]: Started cri-containerd-c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c.scope - libcontainer container c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c.
Sep 5 00:06:07.238165 containerd[1472]: time="2025-09-05T00:06:07.238113703Z" level=info msg="StartContainer for \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\" returns successfully"
Sep 5 00:06:07.255376 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:06:07.256187 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:06:07.256306 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:06:07.261831 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:06:07.262393 systemd[1]: cri-containerd-c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c.scope: Deactivated successfully.
Sep 5 00:06:07.292398 containerd[1472]: time="2025-09-05T00:06:07.292319529Z" level=info msg="shim disconnected" id=c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c namespace=k8s.io
Sep 5 00:06:07.292398 containerd[1472]: time="2025-09-05T00:06:07.292395922Z" level=warning msg="cleaning up after shim disconnected" id=c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c namespace=k8s.io
Sep 5 00:06:07.292784 containerd[1472]: time="2025-09-05T00:06:07.292408165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:06:07.294122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:06:08.101247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c-rootfs.mount: Deactivated successfully.
Sep 5 00:06:08.143924 kubelet[2559]: E0905 00:06:08.143894 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:08.149499 containerd[1472]: time="2025-09-05T00:06:08.147530540Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:06:08.196288 containerd[1472]: time="2025-09-05T00:06:08.196236498Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\""
Sep 5 00:06:08.196976 containerd[1472]: time="2025-09-05T00:06:08.196934651Z" level=info msg="StartContainer for \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\""
Sep 5 00:06:08.197986 containerd[1472]: time="2025-09-05T00:06:08.197953799Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:08.199491 containerd[1472]: time="2025-09-05T00:06:08.198990469Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 5 00:06:08.200230 containerd[1472]: time="2025-09-05T00:06:08.200199965Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:08.202231 containerd[1472]: time="2025-09-05T00:06:08.202185902Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.129073661s"
Sep 5 00:06:08.202231 containerd[1472]: time="2025-09-05T00:06:08.202228752Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 5 00:06:08.207027 containerd[1472]: time="2025-09-05T00:06:08.205921240Z" level=info msg="CreateContainer within sandbox \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 5 00:06:08.225868 containerd[1472]: time="2025-09-05T00:06:08.225817644Z" level=info msg="CreateContainer within sandbox \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\""
Sep 5 00:06:08.226727 containerd[1472]: time="2025-09-05T00:06:08.226694955Z" level=info msg="StartContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\""
Sep 5 00:06:08.237990 systemd[1]: Started cri-containerd-cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3.scope - libcontainer container cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3.
Sep 5 00:06:08.263592 systemd[1]: Started cri-containerd-1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c.scope - libcontainer container 1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c.
Sep 5 00:06:08.293224 systemd[1]: cri-containerd-cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3.scope: Deactivated successfully.
Sep 5 00:06:08.295730 containerd[1472]: time="2025-09-05T00:06:08.295688763Z" level=info msg="StartContainer for \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\" returns successfully"
Sep 5 00:06:08.304247 containerd[1472]: time="2025-09-05T00:06:08.304179398Z" level=info msg="StartContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" returns successfully"
Sep 5 00:06:08.527770 containerd[1472]: time="2025-09-05T00:06:08.527693689Z" level=info msg="shim disconnected" id=cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3 namespace=k8s.io
Sep 5 00:06:08.527770 containerd[1472]: time="2025-09-05T00:06:08.527758151Z" level=warning msg="cleaning up after shim disconnected" id=cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3 namespace=k8s.io
Sep 5 00:06:08.527770 containerd[1472]: time="2025-09-05T00:06:08.527766467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:06:09.106261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3-rootfs.mount: Deactivated successfully.
Sep 5 00:06:09.158325 kubelet[2559]: E0905 00:06:09.158269 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:09.160495 kubelet[2559]: E0905 00:06:09.160455 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:09.162857 containerd[1472]: time="2025-09-05T00:06:09.162778284Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:06:09.181934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690053863.mount: Deactivated successfully.
Sep 5 00:06:09.183824 containerd[1472]: time="2025-09-05T00:06:09.183780931Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\""
Sep 5 00:06:09.184563 containerd[1472]: time="2025-09-05T00:06:09.184522967Z" level=info msg="StartContainer for \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\""
Sep 5 00:06:09.247661 systemd[1]: Started cri-containerd-d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a.scope - libcontainer container d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a.
Sep 5 00:06:09.288762 systemd[1]: cri-containerd-d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a.scope: Deactivated successfully.
Sep 5 00:06:09.303518 kubelet[2559]: I0905 00:06:09.303421 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-k76s2" podStartSLOduration=2.12574723 podStartE2EDuration="16.30340182s" podCreationTimestamp="2025-09-05 00:05:53 +0000 UTC" firstStartedPulling="2025-09-05 00:05:54.025502759 +0000 UTC m=+6.054822537" lastFinishedPulling="2025-09-05 00:06:08.203157359 +0000 UTC m=+20.232477127" observedRunningTime="2025-09-05 00:06:09.258575647 +0000 UTC m=+21.287895435" watchObservedRunningTime="2025-09-05 00:06:09.30340182 +0000 UTC m=+21.332721598"
Sep 5 00:06:09.304943 containerd[1472]: time="2025-09-05T00:06:09.304888336Z" level=info msg="StartContainer for \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\" returns successfully"
Sep 5 00:06:09.335659 containerd[1472]: time="2025-09-05T00:06:09.335562336Z" level=info msg="shim disconnected" id=d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a namespace=k8s.io
Sep 5 00:06:09.335659 containerd[1472]: time="2025-09-05T00:06:09.335648930Z" level=warning msg="cleaning up after shim disconnected" id=d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a namespace=k8s.io
Sep 5 00:06:09.335659 containerd[1472]: time="2025-09-05T00:06:09.335661974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:06:10.103008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a-rootfs.mount: Deactivated successfully.
Sep 5 00:06:10.164858 kubelet[2559]: E0905 00:06:10.164772 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:10.164858 kubelet[2559]: E0905 00:06:10.164795 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:10.166835 containerd[1472]: time="2025-09-05T00:06:10.166790962Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:06:10.192061 containerd[1472]: time="2025-09-05T00:06:10.191991759Z" level=info msg="CreateContainer within sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\""
Sep 5 00:06:10.192682 containerd[1472]: time="2025-09-05T00:06:10.192652581Z" level=info msg="StartContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\""
Sep 5 00:06:10.235839 systemd[1]: Started cri-containerd-23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f.scope - libcontainer container 23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f.
Sep 5 00:06:10.293180 containerd[1472]: time="2025-09-05T00:06:10.293092044Z" level=info msg="StartContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" returns successfully"
Sep 5 00:06:10.508639 kubelet[2559]: I0905 00:06:10.505796 2559 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 5 00:06:10.554375 systemd[1]: Created slice kubepods-burstable-pod18659d74_3573_4e7a_97cc_fcff39cb6a9e.slice - libcontainer container kubepods-burstable-pod18659d74_3573_4e7a_97cc_fcff39cb6a9e.slice.
Sep 5 00:06:10.571353 systemd[1]: Created slice kubepods-burstable-pod221af5ed_8c72_40d3_a862_17a958d040fb.slice - libcontainer container kubepods-burstable-pod221af5ed_8c72_40d3_a862_17a958d040fb.slice.
Sep 5 00:06:10.693764 kubelet[2559]: I0905 00:06:10.693665 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18659d74-3573-4e7a-97cc-fcff39cb6a9e-config-volume\") pod \"coredns-7c65d6cfc9-hl5zj\" (UID: \"18659d74-3573-4e7a-97cc-fcff39cb6a9e\") " pod="kube-system/coredns-7c65d6cfc9-hl5zj"
Sep 5 00:06:10.693764 kubelet[2559]: I0905 00:06:10.693750 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5965\" (UniqueName: \"kubernetes.io/projected/18659d74-3573-4e7a-97cc-fcff39cb6a9e-kube-api-access-t5965\") pod \"coredns-7c65d6cfc9-hl5zj\" (UID: \"18659d74-3573-4e7a-97cc-fcff39cb6a9e\") " pod="kube-system/coredns-7c65d6cfc9-hl5zj"
Sep 5 00:06:10.694038 kubelet[2559]: I0905 00:06:10.693794 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221af5ed-8c72-40d3-a862-17a958d040fb-config-volume\") pod \"coredns-7c65d6cfc9-qwrnh\" (UID: \"221af5ed-8c72-40d3-a862-17a958d040fb\") " pod="kube-system/coredns-7c65d6cfc9-qwrnh"
Sep 5 00:06:10.694038 kubelet[2559]: I0905 00:06:10.693825 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6q6\" (UniqueName: \"kubernetes.io/projected/221af5ed-8c72-40d3-a862-17a958d040fb-kube-api-access-hs6q6\") pod \"coredns-7c65d6cfc9-qwrnh\" (UID: \"221af5ed-8c72-40d3-a862-17a958d040fb\") " pod="kube-system/coredns-7c65d6cfc9-qwrnh"
Sep 5 00:06:10.863017 kubelet[2559]: E0905 00:06:10.862954 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:10.864242 containerd[1472]: time="2025-09-05T00:06:10.864186835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hl5zj,Uid:18659d74-3573-4e7a-97cc-fcff39cb6a9e,Namespace:kube-system,Attempt:0,}"
Sep 5 00:06:10.876054 kubelet[2559]: E0905 00:06:10.876004 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:10.876740 containerd[1472]: time="2025-09-05T00:06:10.876699478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qwrnh,Uid:221af5ed-8c72-40d3-a862-17a958d040fb,Namespace:kube-system,Attempt:0,}"
Sep 5 00:06:11.170142 kubelet[2559]: E0905 00:06:11.169861 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:11.190139 kubelet[2559]: I0905 00:06:11.190035 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mlcmw" podStartSLOduration=6.131353762 podStartE2EDuration="18.190003859s" podCreationTimestamp="2025-09-05 00:05:53 +0000 UTC" firstStartedPulling="2025-09-05 00:05:54.013347135 +0000 UTC m=+6.042666914" lastFinishedPulling="2025-09-05 00:06:06.071997233 +0000 UTC m=+18.101317011" observedRunningTime="2025-09-05 00:06:11.189189497 +0000 UTC m=+23.218509305" watchObservedRunningTime="2025-09-05 00:06:11.190003859 +0000 UTC m=+23.219323637"
Sep 5 00:06:12.171600 kubelet[2559]: E0905 00:06:12.171546 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:12.817825 systemd-networkd[1408]: cilium_host: Link UP
Sep 5 00:06:12.818046 systemd-networkd[1408]: cilium_net: Link UP
Sep 5 00:06:12.818262 systemd-networkd[1408]: cilium_net: Gained carrier
Sep 5 00:06:12.818517 systemd-networkd[1408]: cilium_host: Gained carrier
Sep 5 00:06:12.908633 systemd-networkd[1408]: cilium_host: Gained IPv6LL
Sep 5 00:06:12.940389 systemd-networkd[1408]: cilium_vxlan: Link UP
Sep 5 00:06:12.940403 systemd-networkd[1408]: cilium_vxlan: Gained carrier
Sep 5 00:06:13.173532 kubelet[2559]: E0905 00:06:13.173379 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:13.260633 kernel: NET: Registered PF_ALG protocol family
Sep 5 00:06:13.643633 systemd-networkd[1408]: cilium_net: Gained IPv6LL
Sep 5 00:06:14.036338 systemd-networkd[1408]: lxc_health: Link UP
Sep 5 00:06:14.045919 systemd-networkd[1408]: lxc_health: Gained carrier
Sep 5 00:06:14.514051 systemd-networkd[1408]: lxc0d6ec02fd0b8: Link UP
Sep 5 00:06:14.524581 kernel: eth0: renamed from tmp2221c
Sep 5 00:06:14.531692 systemd-networkd[1408]: lxc0d6ec02fd0b8: Gained carrier
Sep 5 00:06:14.532489 systemd-networkd[1408]: lxce63215295099: Link UP
Sep 5 00:06:14.543499 kernel: eth0: renamed from tmp91b8a
Sep 5 00:06:14.549286 systemd-networkd[1408]: lxce63215295099: Gained carrier
Sep 5 00:06:14.604640 systemd-networkd[1408]: cilium_vxlan: Gained IPv6LL
Sep 5 00:06:15.243763 systemd-networkd[1408]: lxc_health: Gained IPv6LL
Sep 5 00:06:15.490517 kubelet[2559]: E0905 00:06:15.490471 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:15.518071 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:46934.service - OpenSSH per-connection server daemon (10.0.0.1:46934).
Sep 5 00:06:15.563375 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 46934 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:15.564627 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:15.569923 systemd-logind[1458]: New session 10 of user core.
Sep 5 00:06:15.574801 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 5 00:06:15.722529 sshd[3777]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:15.727461 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:46934.service: Deactivated successfully.
Sep 5 00:06:15.730554 systemd[1]: session-10.scope: Deactivated successfully.
Sep 5 00:06:15.731303 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Sep 5 00:06:15.732392 systemd-logind[1458]: Removed session 10.
Sep 5 00:06:16.139673 systemd-networkd[1408]: lxce63215295099: Gained IPv6LL
Sep 5 00:06:16.181192 kubelet[2559]: E0905 00:06:16.181142 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:16.587712 systemd-networkd[1408]: lxc0d6ec02fd0b8: Gained IPv6LL
Sep 5 00:06:17.182762 kubelet[2559]: E0905 00:06:17.182723 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:18.429568 containerd[1472]: time="2025-09-05T00:06:18.429125532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:06:18.429568 containerd[1472]: time="2025-09-05T00:06:18.429243213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:06:18.429568 containerd[1472]: time="2025-09-05T00:06:18.429269993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:18.429568 containerd[1472]: time="2025-09-05T00:06:18.429474788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:18.455678 systemd[1]: Started cri-containerd-91b8aea9c8145bf0b38cb5cfd8b699f06ad9f8b699bd232fef96174dbd733c10.scope - libcontainer container 91b8aea9c8145bf0b38cb5cfd8b699f06ad9f8b699bd232fef96174dbd733c10.
Sep 5 00:06:18.467715 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:06:18.492635 containerd[1472]: time="2025-09-05T00:06:18.492581428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qwrnh,Uid:221af5ed-8c72-40d3-a862-17a958d040fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"91b8aea9c8145bf0b38cb5cfd8b699f06ad9f8b699bd232fef96174dbd733c10\""
Sep 5 00:06:18.493348 kubelet[2559]: E0905 00:06:18.493313 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:18.494974 containerd[1472]: time="2025-09-05T00:06:18.494938116Z" level=info msg="CreateContainer within sandbox \"91b8aea9c8145bf0b38cb5cfd8b699f06ad9f8b699bd232fef96174dbd733c10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:06:18.526271 containerd[1472]: time="2025-09-05T00:06:18.526179850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:06:18.526271 containerd[1472]: time="2025-09-05T00:06:18.526235836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:06:18.526271 containerd[1472]: time="2025-09-05T00:06:18.526258609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:18.526473 containerd[1472]: time="2025-09-05T00:06:18.526398631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:18.536523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704149843.mount: Deactivated successfully.
Sep 5 00:06:18.549382 containerd[1472]: time="2025-09-05T00:06:18.549327869Z" level=info msg="CreateContainer within sandbox \"91b8aea9c8145bf0b38cb5cfd8b699f06ad9f8b699bd232fef96174dbd733c10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bac40d4b541d4417c54a1f6e5a6af4f29998579ac47672be8d51f851756c993\""
Sep 5 00:06:18.550809 containerd[1472]: time="2025-09-05T00:06:18.549870368Z" level=info msg="StartContainer for \"2bac40d4b541d4417c54a1f6e5a6af4f29998579ac47672be8d51f851756c993\""
Sep 5 00:06:18.558479 systemd[1]: Started cri-containerd-2221c1afcc7bbeff49f3129b9314a7eddfd5c8c7233cf510826a22de7a0abe92.scope - libcontainer container 2221c1afcc7bbeff49f3129b9314a7eddfd5c8c7233cf510826a22de7a0abe92.
Sep 5 00:06:18.576211 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:06:18.588709 systemd[1]: Started cri-containerd-2bac40d4b541d4417c54a1f6e5a6af4f29998579ac47672be8d51f851756c993.scope - libcontainer container 2bac40d4b541d4417c54a1f6e5a6af4f29998579ac47672be8d51f851756c993.
Sep 5 00:06:18.606987 containerd[1472]: time="2025-09-05T00:06:18.606935378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hl5zj,Uid:18659d74-3573-4e7a-97cc-fcff39cb6a9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2221c1afcc7bbeff49f3129b9314a7eddfd5c8c7233cf510826a22de7a0abe92\""
Sep 5 00:06:18.607831 kubelet[2559]: E0905 00:06:18.607808 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:18.611209 containerd[1472]: time="2025-09-05T00:06:18.611064224Z" level=info msg="CreateContainer within sandbox \"2221c1afcc7bbeff49f3129b9314a7eddfd5c8c7233cf510826a22de7a0abe92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:06:18.629738 containerd[1472]: time="2025-09-05T00:06:18.629691080Z" level=info msg="StartContainer for \"2bac40d4b541d4417c54a1f6e5a6af4f29998579ac47672be8d51f851756c993\" returns successfully"
Sep 5 00:06:18.637922 containerd[1472]: time="2025-09-05T00:06:18.637871087Z" level=info msg="CreateContainer within sandbox \"2221c1afcc7bbeff49f3129b9314a7eddfd5c8c7233cf510826a22de7a0abe92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ca59a1dc9c20b0e4ca033f8407835be2e9c4056be5b0eb525761c1e6c497f36\""
Sep 5 00:06:18.638600 containerd[1472]: time="2025-09-05T00:06:18.638561244Z" level=info msg="StartContainer for \"3ca59a1dc9c20b0e4ca033f8407835be2e9c4056be5b0eb525761c1e6c497f36\""
Sep 5 00:06:18.668597 systemd[1]: Started cri-containerd-3ca59a1dc9c20b0e4ca033f8407835be2e9c4056be5b0eb525761c1e6c497f36.scope - libcontainer container 3ca59a1dc9c20b0e4ca033f8407835be2e9c4056be5b0eb525761c1e6c497f36.
Sep 5 00:06:18.702136 containerd[1472]: time="2025-09-05T00:06:18.701973618Z" level=info msg="StartContainer for \"3ca59a1dc9c20b0e4ca033f8407835be2e9c4056be5b0eb525761c1e6c497f36\" returns successfully"
Sep 5 00:06:19.186851 kubelet[2559]: E0905 00:06:19.186802 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:19.189198 kubelet[2559]: E0905 00:06:19.188770 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:19.278554 kubelet[2559]: I0905 00:06:19.278479 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qwrnh" podStartSLOduration=26.277703053 podStartE2EDuration="26.277703053s" podCreationTimestamp="2025-09-05 00:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:19.231998956 +0000 UTC m=+31.261318734" watchObservedRunningTime="2025-09-05 00:06:19.277703053 +0000 UTC m=+31.307022851"
Sep 5 00:06:19.434774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554104197.mount: Deactivated successfully.
Sep 5 00:06:20.190840 kubelet[2559]: E0905 00:06:20.190733 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:20.190840 kubelet[2559]: E0905 00:06:20.190794 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:20.734010 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:51316.service - OpenSSH per-connection server daemon (10.0.0.1:51316).
Sep 5 00:06:20.774534 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 51316 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:20.776265 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:20.780383 systemd-logind[1458]: New session 11 of user core.
Sep 5 00:06:20.789592 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:06:20.941854 sshd[3970]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:20.946295 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:51316.service: Deactivated successfully.
Sep 5 00:06:20.948304 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:06:20.949070 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:06:20.949984 systemd-logind[1458]: Removed session 11.
Sep 5 00:06:21.192142 kubelet[2559]: E0905 00:06:21.192109 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:21.192142 kubelet[2559]: E0905 00:06:21.192140 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:25.954118 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:51328.service - OpenSSH per-connection server daemon (10.0.0.1:51328).
Sep 5 00:06:25.992239 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 51328 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:25.994383 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:26.000497 systemd-logind[1458]: New session 12 of user core.
Sep 5 00:06:26.010631 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:06:26.137622 sshd[3988]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:26.142346 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:51328.service: Deactivated successfully.
Sep 5 00:06:26.145607 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:06:26.146449 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:06:26.148161 systemd-logind[1458]: Removed session 12.
Sep 5 00:06:31.152134 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:37404.service - OpenSSH per-connection server daemon (10.0.0.1:37404).
Sep 5 00:06:31.198616 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 37404 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:31.200904 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:31.205822 systemd-logind[1458]: New session 13 of user core.
Sep 5 00:06:31.218689 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:06:31.331717 sshd[4004]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:31.336866 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:37404.service: Deactivated successfully.
Sep 5 00:06:31.340086 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:06:31.341434 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:06:31.342424 systemd-logind[1458]: Removed session 13.
Sep 5 00:06:36.344111 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:37406.service - OpenSSH per-connection server daemon (10.0.0.1:37406).
Sep 5 00:06:36.384851 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 37406 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:36.386967 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:36.392277 systemd-logind[1458]: New session 14 of user core.
Sep 5 00:06:36.399620 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:06:36.510862 sshd[4019]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:36.520383 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:37406.service: Deactivated successfully.
Sep 5 00:06:36.522222 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:06:36.524123 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:06:36.530736 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:37422.service - OpenSSH per-connection server daemon (10.0.0.1:37422).
Sep 5 00:06:36.531837 systemd-logind[1458]: Removed session 14.
Sep 5 00:06:36.564134 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 37422 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:36.566044 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:36.570737 systemd-logind[1458]: New session 15 of user core.
Sep 5 00:06:36.580606 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:06:36.750140 sshd[4034]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:36.761987 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:37422.service: Deactivated successfully.
Sep 5 00:06:36.764332 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:06:36.768197 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:06:36.779143 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:37428.service - OpenSSH per-connection server daemon (10.0.0.1:37428).
Sep 5 00:06:36.779965 systemd-logind[1458]: Removed session 15.
Sep 5 00:06:36.811962 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 37428 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:36.813960 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:36.820171 systemd-logind[1458]: New session 16 of user core.
Sep 5 00:06:36.827709 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:06:36.941877 sshd[4047]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:36.946141 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:37428.service: Deactivated successfully.
Sep 5 00:06:36.948141 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:06:36.948771 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:06:36.949842 systemd-logind[1458]: Removed session 16.
Sep 5 00:06:41.954760 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39870.service - OpenSSH per-connection server daemon (10.0.0.1:39870).
Sep 5 00:06:41.993770 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 39870 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:41.995706 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:41.999899 systemd-logind[1458]: New session 17 of user core.
Sep 5 00:06:42.009586 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:06:42.120900 sshd[4062]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:42.125593 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39870.service: Deactivated successfully.
Sep 5 00:06:42.128008 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:06:42.128792 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:06:42.129925 systemd-logind[1458]: Removed session 17.
Sep 5 00:06:47.135590 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:39886.service - OpenSSH per-connection server daemon (10.0.0.1:39886).
Sep 5 00:06:47.177957 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 39886 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:47.180131 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:47.185092 systemd-logind[1458]: New session 18 of user core.
Sep 5 00:06:47.194757 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:06:47.316066 sshd[4076]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:47.320431 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:39886.service: Deactivated successfully.
Sep 5 00:06:47.322880 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:06:47.323793 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:06:47.324775 systemd-logind[1458]: Removed session 18.
Sep 5 00:06:52.331649 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:53994.service - OpenSSH per-connection server daemon (10.0.0.1:53994).
Sep 5 00:06:52.368737 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:52.370602 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:52.375054 systemd-logind[1458]: New session 19 of user core.
Sep 5 00:06:52.385608 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:06:52.495860 sshd[4092]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:52.513806 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:53994.service: Deactivated successfully.
Sep 5 00:06:52.516126 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:06:52.517725 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:06:52.525794 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996).
Sep 5 00:06:52.527102 systemd-logind[1458]: Removed session 19.
Sep 5 00:06:52.563333 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:52.565419 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:52.570413 systemd-logind[1458]: New session 20 of user core.
Sep 5 00:06:52.579631 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:06:52.896592 sshd[4107]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:52.908862 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:53996.service: Deactivated successfully. Sep 5 00:06:52.911018 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:06:52.912808 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:06:52.922702 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). Sep 5 00:06:52.923810 systemd-logind[1458]: Removed session 20. Sep 5 00:06:52.960708 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:52.963033 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:52.967699 systemd-logind[1458]: New session 21 of user core. Sep 5 00:06:52.986684 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:06:54.214585 sshd[4119]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:54.227349 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:54012.service: Deactivated successfully. Sep 5 00:06:54.233304 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:06:54.236072 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:06:54.246808 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:54024.service - OpenSSH per-connection server daemon (10.0.0.1:54024). Sep 5 00:06:54.248710 systemd-logind[1458]: Removed session 21. Sep 5 00:06:54.286757 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 54024 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:54.288166 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:54.292790 systemd-logind[1458]: New session 22 of user core. Sep 5 00:06:54.310600 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 5 00:06:54.655186 sshd[4139]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:54.666362 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:54024.service: Deactivated successfully. Sep 5 00:06:54.668337 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:06:54.670541 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:06:54.683767 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:54026.service - OpenSSH per-connection server daemon (10.0.0.1:54026). Sep 5 00:06:54.684754 systemd-logind[1458]: Removed session 22. Sep 5 00:06:54.717636 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 54026 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:54.719289 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:54.723667 systemd-logind[1458]: New session 23 of user core. Sep 5 00:06:54.730579 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:06:54.839239 sshd[4153]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:54.843230 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:54026.service: Deactivated successfully. Sep 5 00:06:54.845235 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:06:54.845816 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:06:54.846811 systemd-logind[1458]: Removed session 23. Sep 5 00:06:59.851132 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:54042.service - OpenSSH per-connection server daemon (10.0.0.1:54042). Sep 5 00:06:59.888498 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 54042 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:59.890673 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:59.894917 systemd-logind[1458]: New session 24 of user core. Sep 5 00:06:59.901578 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 5 00:07:00.008136 sshd[4167]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:00.012472 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:54042.service: Deactivated successfully. Sep 5 00:07:00.014956 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:07:00.015826 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:07:00.016913 systemd-logind[1458]: Removed session 24. Sep 5 00:07:05.020709 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:40664.service - OpenSSH per-connection server daemon (10.0.0.1:40664). Sep 5 00:07:05.062006 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 40664 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:05.065164 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:05.069915 systemd-logind[1458]: New session 25 of user core. Sep 5 00:07:05.079630 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:07:05.196257 sshd[4184]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:05.201102 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:40664.service: Deactivated successfully. Sep 5 00:07:05.204150 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:07:05.204880 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:07:05.206140 systemd-logind[1458]: Removed session 25. Sep 5 00:07:07.088588 kubelet[2559]: E0905 00:07:07.088522 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:10.208743 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:51914.service - OpenSSH per-connection server daemon (10.0.0.1:51914). 
Sep 5 00:07:10.247721 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 51914 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:10.249389 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:10.254569 systemd-logind[1458]: New session 26 of user core. Sep 5 00:07:10.266605 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 00:07:10.372851 sshd[4198]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:10.377261 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:51914.service: Deactivated successfully. Sep 5 00:07:10.379379 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 00:07:10.380085 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit. Sep 5 00:07:10.381151 systemd-logind[1458]: Removed session 26. Sep 5 00:07:15.385559 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:51918.service - OpenSSH per-connection server daemon (10.0.0.1:51918). Sep 5 00:07:15.425302 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 51918 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:15.427037 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:15.431248 systemd-logind[1458]: New session 27 of user core. Sep 5 00:07:15.442651 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 00:07:15.551369 sshd[4212]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:15.563608 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:51918.service: Deactivated successfully. Sep 5 00:07:15.565811 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 00:07:15.568035 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit. Sep 5 00:07:15.577880 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932). Sep 5 00:07:15.579002 systemd-logind[1458]: Removed session 27. 
Sep 5 00:07:15.612707 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:15.614668 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:15.619234 systemd-logind[1458]: New session 28 of user core. Sep 5 00:07:15.630620 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 00:07:16.972044 kubelet[2559]: I0905 00:07:16.971927 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hl5zj" podStartSLOduration=83.971900511 podStartE2EDuration="1m23.971900511s" podCreationTimestamp="2025-09-05 00:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:19.312740892 +0000 UTC m=+31.342060680" watchObservedRunningTime="2025-09-05 00:07:16.971900511 +0000 UTC m=+89.001220289" Sep 5 00:07:16.986567 containerd[1472]: time="2025-09-05T00:07:16.986497190Z" level=info msg="StopContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" with timeout 30 (s)" Sep 5 00:07:16.987082 containerd[1472]: time="2025-09-05T00:07:16.986976500Z" level=info msg="Stop container \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" with signal terminated" Sep 5 00:07:17.011196 systemd[1]: cri-containerd-1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c.scope: Deactivated successfully. Sep 5 00:07:17.039998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c-rootfs.mount: Deactivated successfully. 
Sep 5 00:07:17.043266 containerd[1472]: time="2025-09-05T00:07:17.043230422Z" level=info msg="StopContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" with timeout 2 (s)" Sep 5 00:07:17.043425 containerd[1472]: time="2025-09-05T00:07:17.043250441Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:07:17.043607 containerd[1472]: time="2025-09-05T00:07:17.043575818Z" level=info msg="Stop container \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" with signal terminated" Sep 5 00:07:17.052477 systemd-networkd[1408]: lxc_health: Link DOWN Sep 5 00:07:17.052506 systemd-networkd[1408]: lxc_health: Lost carrier Sep 5 00:07:17.057890 containerd[1472]: time="2025-09-05T00:07:17.057776778Z" level=info msg="shim disconnected" id=1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c namespace=k8s.io Sep 5 00:07:17.057890 containerd[1472]: time="2025-09-05T00:07:17.057880976Z" level=warning msg="cleaning up after shim disconnected" id=1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c namespace=k8s.io Sep 5 00:07:17.057890 containerd[1472]: time="2025-09-05T00:07:17.057896576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:17.078988 systemd[1]: cri-containerd-23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f.scope: Deactivated successfully. Sep 5 00:07:17.079381 systemd[1]: cri-containerd-23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f.scope: Consumed 7.608s CPU time. 
Sep 5 00:07:17.080115 containerd[1472]: time="2025-09-05T00:07:17.080061801Z" level=info msg="StopContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" returns successfully" Sep 5 00:07:17.088043 containerd[1472]: time="2025-09-05T00:07:17.087988144Z" level=info msg="StopPodSandbox for \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\"" Sep 5 00:07:17.088164 containerd[1472]: time="2025-09-05T00:07:17.088058818Z" level=info msg="Container to stop \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.088607 kubelet[2559]: E0905 00:07:17.088581 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:17.093049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607-shm.mount: Deactivated successfully. Sep 5 00:07:17.103411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f-rootfs.mount: Deactivated successfully. Sep 5 00:07:17.104555 systemd[1]: cri-containerd-a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607.scope: Deactivated successfully. 
Sep 5 00:07:17.119312 containerd[1472]: time="2025-09-05T00:07:17.119221500Z" level=info msg="shim disconnected" id=23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f namespace=k8s.io Sep 5 00:07:17.119312 containerd[1472]: time="2025-09-05T00:07:17.119298747Z" level=warning msg="cleaning up after shim disconnected" id=23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f namespace=k8s.io Sep 5 00:07:17.119312 containerd[1472]: time="2025-09-05T00:07:17.119310820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:17.130525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607-rootfs.mount: Deactivated successfully. Sep 5 00:07:17.135261 containerd[1472]: time="2025-09-05T00:07:17.135185779Z" level=info msg="shim disconnected" id=a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607 namespace=k8s.io Sep 5 00:07:17.135261 containerd[1472]: time="2025-09-05T00:07:17.135250521Z" level=warning msg="cleaning up after shim disconnected" id=a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607 namespace=k8s.io Sep 5 00:07:17.135261 containerd[1472]: time="2025-09-05T00:07:17.135261353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:17.141661 containerd[1472]: time="2025-09-05T00:07:17.141585894Z" level=info msg="StopContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" returns successfully" Sep 5 00:07:17.142512 containerd[1472]: time="2025-09-05T00:07:17.142479340Z" level=info msg="StopPodSandbox for \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\"" Sep 5 00:07:17.142595 containerd[1472]: time="2025-09-05T00:07:17.142526269Z" level=info msg="Container to stop \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.142595 containerd[1472]: 
time="2025-09-05T00:07:17.142547068Z" level=info msg="Container to stop \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.142595 containerd[1472]: time="2025-09-05T00:07:17.142560074Z" level=info msg="Container to stop \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.142595 containerd[1472]: time="2025-09-05T00:07:17.142574611Z" level=info msg="Container to stop \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.142595 containerd[1472]: time="2025-09-05T00:07:17.142587615Z" level=info msg="Container to stop \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:17.151027 systemd[1]: cri-containerd-614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66.scope: Deactivated successfully. 
Sep 5 00:07:17.166675 containerd[1472]: time="2025-09-05T00:07:17.166605138Z" level=info msg="TearDown network for sandbox \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\" successfully" Sep 5 00:07:17.167040 containerd[1472]: time="2025-09-05T00:07:17.166864962Z" level=info msg="StopPodSandbox for \"a8575a7edff0988538ed7272c06129be2fb8134c4c265e337688e8191135c607\" returns successfully" Sep 5 00:07:17.190959 containerd[1472]: time="2025-09-05T00:07:17.190710157Z" level=info msg="shim disconnected" id=614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66 namespace=k8s.io Sep 5 00:07:17.190959 containerd[1472]: time="2025-09-05T00:07:17.190950244Z" level=warning msg="cleaning up after shim disconnected" id=614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66 namespace=k8s.io Sep 5 00:07:17.190959 containerd[1472]: time="2025-09-05T00:07:17.190961134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:17.206932 containerd[1472]: time="2025-09-05T00:07:17.206875006Z" level=info msg="TearDown network for sandbox \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" successfully" Sep 5 00:07:17.206932 containerd[1472]: time="2025-09-05T00:07:17.206907869Z" level=info msg="StopPodSandbox for \"614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66\" returns successfully" Sep 5 00:07:17.306710 kubelet[2559]: I0905 00:07:17.306585 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-lib-modules\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.306710 kubelet[2559]: I0905 00:07:17.306641 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-bpf-maps\") pod 
\"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.306710 kubelet[2559]: I0905 00:07:17.306664 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-run\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.306710 kubelet[2559]: I0905 00:07:17.306700 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/156ed862-6632-4160-9bcd-c42ca1eaab40-cilium-config-path\") pod \"156ed862-6632-4160-9bcd-c42ca1eaab40\" (UID: \"156ed862-6632-4160-9bcd-c42ca1eaab40\") " Sep 5 00:07:17.306710 kubelet[2559]: I0905 00:07:17.306727 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cni-path\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306754 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjspb\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-kube-api-access-sjspb\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306773 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-kernel\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306793 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-xtables-lock\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306812 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-net\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306832 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-cgroup\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307049 kubelet[2559]: I0905 00:07:17.306852 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjld2\" (UniqueName: \"kubernetes.io/projected/156ed862-6632-4160-9bcd-c42ca1eaab40-kube-api-access-wjld2\") pod \"156ed862-6632-4160-9bcd-c42ca1eaab40\" (UID: \"156ed862-6632-4160-9bcd-c42ca1eaab40\") " Sep 5 00:07:17.307200 kubelet[2559]: I0905 00:07:17.306868 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-config-path\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307200 kubelet[2559]: I0905 00:07:17.306885 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hostproc\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307200 
kubelet[2559]: I0905 00:07:17.306907 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-clustermesh-secrets\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307200 kubelet[2559]: I0905 00:07:17.306921 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hubble-tls\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307200 kubelet[2559]: I0905 00:07:17.306934 2559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-etc-cni-netd\") pod \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\" (UID: \"48a9996e-3cfe-4c60-adb6-4faa6ae8425c\") " Sep 5 00:07:17.307200 kubelet[2559]: I0905 00:07:17.306763 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307344 kubelet[2559]: I0905 00:07:17.306792 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cni-path" (OuterVolumeSpecName: "cni-path") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307344 kubelet[2559]: I0905 00:07:17.306770 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307344 kubelet[2559]: I0905 00:07:17.306803 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307344 kubelet[2559]: I0905 00:07:17.306820 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307344 kubelet[2559]: I0905 00:07:17.306981 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307503 kubelet[2559]: I0905 00:07:17.307062 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307503 kubelet[2559]: I0905 00:07:17.307077 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.307503 kubelet[2559]: I0905 00:07:17.307095 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.310174 kubelet[2559]: I0905 00:07:17.307628 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hostproc" (OuterVolumeSpecName: "hostproc") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:07:17.310811 kubelet[2559]: I0905 00:07:17.310771 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156ed862-6632-4160-9bcd-c42ca1eaab40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "156ed862-6632-4160-9bcd-c42ca1eaab40" (UID: "156ed862-6632-4160-9bcd-c42ca1eaab40"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:07:17.310957 kubelet[2559]: I0905 00:07:17.310922 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156ed862-6632-4160-9bcd-c42ca1eaab40-kube-api-access-wjld2" (OuterVolumeSpecName: "kube-api-access-wjld2") pod "156ed862-6632-4160-9bcd-c42ca1eaab40" (UID: "156ed862-6632-4160-9bcd-c42ca1eaab40"). InnerVolumeSpecName "kube-api-access-wjld2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:07:17.311865 kubelet[2559]: I0905 00:07:17.311819 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-kube-api-access-sjspb" (OuterVolumeSpecName: "kube-api-access-sjspb") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "kube-api-access-sjspb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:07:17.312581 kubelet[2559]: I0905 00:07:17.312246 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:07:17.314394 kubelet[2559]: I0905 00:07:17.314142 2559 scope.go:117] "RemoveContainer" containerID="1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c" Sep 5 00:07:17.314394 kubelet[2559]: I0905 00:07:17.314328 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:07:17.315182 kubelet[2559]: I0905 00:07:17.314895 2559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "48a9996e-3cfe-4c60-adb6-4faa6ae8425c" (UID: "48a9996e-3cfe-4c60-adb6-4faa6ae8425c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:07:17.317176 containerd[1472]: time="2025-09-05T00:07:17.317116701Z" level=info msg="RemoveContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\"" Sep 5 00:07:17.323576 systemd[1]: Removed slice kubepods-besteffort-pod156ed862_6632_4160_9bcd_c42ca1eaab40.slice - libcontainer container kubepods-besteffort-pod156ed862_6632_4160_9bcd_c42ca1eaab40.slice. Sep 5 00:07:17.331835 systemd[1]: Removed slice kubepods-burstable-pod48a9996e_3cfe_4c60_adb6_4faa6ae8425c.slice - libcontainer container kubepods-burstable-pod48a9996e_3cfe_4c60_adb6_4faa6ae8425c.slice. Sep 5 00:07:17.331943 systemd[1]: kubepods-burstable-pod48a9996e_3cfe_4c60_adb6_4faa6ae8425c.slice: Consumed 7.729s CPU time. 
Sep 5 00:07:17.341385 containerd[1472]: time="2025-09-05T00:07:17.341314847Z" level=info msg="RemoveContainer for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" returns successfully" Sep 5 00:07:17.341698 kubelet[2559]: I0905 00:07:17.341655 2559 scope.go:117] "RemoveContainer" containerID="1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c" Sep 5 00:07:17.345417 containerd[1472]: time="2025-09-05T00:07:17.345314163Z" level=error msg="ContainerStatus for \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\": not found" Sep 5 00:07:17.354339 kubelet[2559]: E0905 00:07:17.354292 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\": not found" containerID="1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c" Sep 5 00:07:17.354476 kubelet[2559]: I0905 00:07:17.354337 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c"} err="failed to get container status \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b7e4ca47839fdd97648310dae1b83e8d6bd538cca7e3ce6953464f0fb69fa8c\": not found" Sep 5 00:07:17.354476 kubelet[2559]: I0905 00:07:17.354454 2559 scope.go:117] "RemoveContainer" containerID="23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f" Sep 5 00:07:17.355768 containerd[1472]: time="2025-09-05T00:07:17.355724273Z" level=info msg="RemoveContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\"" Sep 5 00:07:17.359480 
containerd[1472]: time="2025-09-05T00:07:17.359431954Z" level=info msg="RemoveContainer for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" returns successfully" Sep 5 00:07:17.359636 kubelet[2559]: I0905 00:07:17.359591 2559 scope.go:117] "RemoveContainer" containerID="d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a" Sep 5 00:07:17.360986 containerd[1472]: time="2025-09-05T00:07:17.360685866Z" level=info msg="RemoveContainer for \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\"" Sep 5 00:07:17.364351 containerd[1472]: time="2025-09-05T00:07:17.364308115Z" level=info msg="RemoveContainer for \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\" returns successfully" Sep 5 00:07:17.364593 kubelet[2559]: I0905 00:07:17.364553 2559 scope.go:117] "RemoveContainer" containerID="cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3" Sep 5 00:07:17.365723 containerd[1472]: time="2025-09-05T00:07:17.365688256Z" level=info msg="RemoveContainer for \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\"" Sep 5 00:07:17.369200 containerd[1472]: time="2025-09-05T00:07:17.369167303Z" level=info msg="RemoveContainer for \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\" returns successfully" Sep 5 00:07:17.369363 kubelet[2559]: I0905 00:07:17.369331 2559 scope.go:117] "RemoveContainer" containerID="c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c" Sep 5 00:07:17.370475 containerd[1472]: time="2025-09-05T00:07:17.370426885Z" level=info msg="RemoveContainer for \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\"" Sep 5 00:07:17.382390 containerd[1472]: time="2025-09-05T00:07:17.382343366Z" level=info msg="RemoveContainer for \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\" returns successfully" Sep 5 00:07:17.382672 kubelet[2559]: I0905 00:07:17.382621 2559 scope.go:117] "RemoveContainer" 
containerID="fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c" Sep 5 00:07:17.383697 containerd[1472]: time="2025-09-05T00:07:17.383654757Z" level=info msg="RemoveContainer for \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\"" Sep 5 00:07:17.387133 containerd[1472]: time="2025-09-05T00:07:17.387102064Z" level=info msg="RemoveContainer for \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\" returns successfully" Sep 5 00:07:17.387331 kubelet[2559]: I0905 00:07:17.387294 2559 scope.go:117] "RemoveContainer" containerID="23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f" Sep 5 00:07:17.387537 containerd[1472]: time="2025-09-05T00:07:17.387502364Z" level=error msg="ContainerStatus for \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\": not found" Sep 5 00:07:17.387640 kubelet[2559]: E0905 00:07:17.387602 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\": not found" containerID="23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f" Sep 5 00:07:17.387686 kubelet[2559]: I0905 00:07:17.387641 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f"} err="failed to get container status \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\": rpc error: code = NotFound desc = an error occurred when try to find container \"23c446ecca3168f9691f4007aa4ebd7b723e9b487da89f2c8487ff43c693b20f\": not found" Sep 5 00:07:17.387686 kubelet[2559]: I0905 00:07:17.387665 2559 scope.go:117] "RemoveContainer" 
containerID="d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a" Sep 5 00:07:17.387836 containerd[1472]: time="2025-09-05T00:07:17.387804087Z" level=error msg="ContainerStatus for \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\": not found" Sep 5 00:07:17.387951 kubelet[2559]: E0905 00:07:17.387927 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\": not found" containerID="d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a" Sep 5 00:07:17.387999 kubelet[2559]: I0905 00:07:17.387959 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a"} err="failed to get container status \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d14faeea5dbc784b4bd748e6f5793235a5e036db988f536e659a7dbfbeaa8a1a\": not found" Sep 5 00:07:17.387999 kubelet[2559]: I0905 00:07:17.387984 2559 scope.go:117] "RemoveContainer" containerID="cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3" Sep 5 00:07:17.388205 containerd[1472]: time="2025-09-05T00:07:17.388170353Z" level=error msg="ContainerStatus for \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\": not found" Sep 5 00:07:17.388314 kubelet[2559]: E0905 00:07:17.388294 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\": not found" containerID="cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3" Sep 5 00:07:17.388349 kubelet[2559]: I0905 00:07:17.388317 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3"} err="failed to get container status \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\": rpc error: code = NotFound desc = an error occurred when try to find container \"cac1d353ed2c425f636373242e466ca287a817412151b0f3f0b5a7849ba1cef3\": not found" Sep 5 00:07:17.388349 kubelet[2559]: I0905 00:07:17.388331 2559 scope.go:117] "RemoveContainer" containerID="c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c" Sep 5 00:07:17.388732 containerd[1472]: time="2025-09-05T00:07:17.388686022Z" level=error msg="ContainerStatus for \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\": not found" Sep 5 00:07:17.388903 kubelet[2559]: E0905 00:07:17.388872 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\": not found" containerID="c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c" Sep 5 00:07:17.388951 kubelet[2559]: I0905 00:07:17.388913 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c"} err="failed to get container status \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"c3b5356b837ad9d10f86e6250ea5ec15cd0c3a6ec9a8fdb339d1bf372c587e0c\": not found" Sep 5 00:07:17.388951 kubelet[2559]: I0905 00:07:17.388948 2559 scope.go:117] "RemoveContainer" containerID="fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c" Sep 5 00:07:17.389192 containerd[1472]: time="2025-09-05T00:07:17.389157587Z" level=error msg="ContainerStatus for \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\": not found" Sep 5 00:07:17.389292 kubelet[2559]: E0905 00:07:17.389265 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\": not found" containerID="fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c" Sep 5 00:07:17.389343 kubelet[2559]: I0905 00:07:17.389288 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c"} err="failed to get container status \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa3ca50cdae8508bb77cfdf8e99c5ba3e93cc7d132401a044732464f3092af6c\": not found" Sep 5 00:07:17.407574 kubelet[2559]: I0905 00:07:17.407524 2559 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407574 kubelet[2559]: I0905 00:07:17.407553 2559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjspb\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-kube-api-access-sjspb\") 
on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407574 kubelet[2559]: I0905 00:07:17.407573 2559 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407574 kubelet[2559]: I0905 00:07:17.407584 2559 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407574 kubelet[2559]: I0905 00:07:17.407594 2559 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407605 2559 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407630 2559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjld2\" (UniqueName: \"kubernetes.io/projected/156ed862-6632-4160-9bcd-c42ca1eaab40-kube-api-access-wjld2\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407641 2559 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407652 2559 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 
00:07:17.407661 2559 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407677 2559 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407686 2559 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.407837 kubelet[2559]: I0905 00:07:17.407699 2559 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.408059 kubelet[2559]: I0905 00:07:17.407709 2559 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.408059 kubelet[2559]: I0905 00:07:17.407717 2559 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/48a9996e-3cfe-4c60-adb6-4faa6ae8425c-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:17.408059 kubelet[2559]: I0905 00:07:17.407727 2559 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/156ed862-6632-4160-9bcd-c42ca1eaab40-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:18.016106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66-rootfs.mount: 
Deactivated successfully. Sep 5 00:07:18.016247 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-614266876f9a755ea9f4a5c686f27fa47b54b0c28af19433631c4f29e4ac5a66-shm.mount: Deactivated successfully. Sep 5 00:07:18.016359 systemd[1]: var-lib-kubelet-pods-48a9996e\x2d3cfe\x2d4c60\x2dadb6\x2d4faa6ae8425c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjspb.mount: Deactivated successfully. Sep 5 00:07:18.016480 systemd[1]: var-lib-kubelet-pods-48a9996e\x2d3cfe\x2d4c60\x2dadb6\x2d4faa6ae8425c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 00:07:18.016597 systemd[1]: var-lib-kubelet-pods-156ed862\x2d6632\x2d4160\x2d9bcd\x2dc42ca1eaab40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwjld2.mount: Deactivated successfully. Sep 5 00:07:18.016709 systemd[1]: var-lib-kubelet-pods-48a9996e\x2d3cfe\x2d4c60\x2dadb6\x2d4faa6ae8425c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 00:07:18.088944 kubelet[2559]: E0905 00:07:18.088882 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:18.091619 kubelet[2559]: I0905 00:07:18.091563 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156ed862-6632-4160-9bcd-c42ca1eaab40" path="/var/lib/kubelet/pods/156ed862-6632-4160-9bcd-c42ca1eaab40/volumes" Sep 5 00:07:18.092395 kubelet[2559]: I0905 00:07:18.092356 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" path="/var/lib/kubelet/pods/48a9996e-3cfe-4c60-adb6-4faa6ae8425c/volumes" Sep 5 00:07:18.148599 kubelet[2559]: E0905 00:07:18.148526 2559 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 00:07:18.994709 
sshd[4226]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:19.003430 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:51932.service: Deactivated successfully. Sep 5 00:07:19.006227 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 00:07:19.008006 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit. Sep 5 00:07:19.013960 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:51938.service - OpenSSH per-connection server daemon (10.0.0.1:51938). Sep 5 00:07:19.015144 systemd-logind[1458]: Removed session 28. Sep 5 00:07:19.053350 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 51938 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:19.055588 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:19.060977 systemd-logind[1458]: New session 29 of user core. Sep 5 00:07:19.070657 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 5 00:07:19.702987 sshd[4390]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:19.712179 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:51938.service: Deactivated successfully. 
Sep 5 00:07:19.717010 kubelet[2559]: E0905 00:07:19.716939 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="mount-cgroup" Sep 5 00:07:19.717010 kubelet[2559]: E0905 00:07:19.716988 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="apply-sysctl-overwrites" Sep 5 00:07:19.717010 kubelet[2559]: E0905 00:07:19.717015 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="clean-cilium-state" Sep 5 00:07:19.719762 kubelet[2559]: E0905 00:07:19.717029 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="cilium-agent" Sep 5 00:07:19.719762 kubelet[2559]: E0905 00:07:19.717043 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="mount-bpf-fs" Sep 5 00:07:19.719762 kubelet[2559]: E0905 00:07:19.717053 2559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="156ed862-6632-4160-9bcd-c42ca1eaab40" containerName="cilium-operator" Sep 5 00:07:19.719762 kubelet[2559]: I0905 00:07:19.717100 2559 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a9996e-3cfe-4c60-adb6-4faa6ae8425c" containerName="cilium-agent" Sep 5 00:07:19.719762 kubelet[2559]: I0905 00:07:19.717113 2559 memory_manager.go:354] "RemoveStaleState removing state" podUID="156ed862-6632-4160-9bcd-c42ca1eaab40" containerName="cilium-operator" Sep 5 00:07:19.717460 systemd[1]: session-29.scope: Deactivated successfully. Sep 5 00:07:19.725206 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit. Sep 5 00:07:19.742291 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:51944.service - OpenSSH per-connection server daemon (10.0.0.1:51944). Sep 5 00:07:19.746646 systemd-logind[1458]: Removed session 29. 
Sep 5 00:07:19.753105 systemd[1]: Created slice kubepods-burstable-pod8cc290d6_7eee_4062_ade5_43a373ad230d.slice - libcontainer container kubepods-burstable-pod8cc290d6_7eee_4062_ade5_43a373ad230d.slice. Sep 5 00:07:19.786462 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 51944 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:19.788505 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:19.793281 systemd-logind[1458]: New session 30 of user core. Sep 5 00:07:19.797568 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 5 00:07:19.822583 kubelet[2559]: I0905 00:07:19.822528 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-cilium-cgroup\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822583 kubelet[2559]: I0905 00:07:19.822577 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-lib-modules\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822583 kubelet[2559]: I0905 00:07:19.822597 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-hostproc\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822795 kubelet[2559]: I0905 00:07:19.822624 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8cc290d6-7eee-4062-ade5-43a373ad230d-cilium-config-path\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822795 kubelet[2559]: I0905 00:07:19.822641 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-etc-cni-netd\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822795 kubelet[2559]: I0905 00:07:19.822656 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8cc290d6-7eee-4062-ade5-43a373ad230d-cilium-ipsec-secrets\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822795 kubelet[2559]: I0905 00:07:19.822671 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cc290d6-7eee-4062-ade5-43a373ad230d-hubble-tls\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822795 kubelet[2559]: I0905 00:07:19.822743 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvgx\" (UniqueName: \"kubernetes.io/projected/8cc290d6-7eee-4062-ade5-43a373ad230d-kube-api-access-tzvgx\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822975 kubelet[2559]: I0905 00:07:19.822869 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cc290d6-7eee-4062-ade5-43a373ad230d-clustermesh-secrets\") pod \"cilium-xr9pm\" (UID: 
\"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822975 kubelet[2559]: I0905 00:07:19.822923 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-bpf-maps\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.822975 kubelet[2559]: I0905 00:07:19.822953 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-xtables-lock\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.823089 kubelet[2559]: I0905 00:07:19.822985 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-host-proc-sys-net\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.823089 kubelet[2559]: I0905 00:07:19.823025 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-host-proc-sys-kernel\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.823089 kubelet[2559]: I0905 00:07:19.823070 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-cilium-run\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.823220 kubelet[2559]: I0905 00:07:19.823166 
2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cc290d6-7eee-4062-ade5-43a373ad230d-cni-path\") pod \"cilium-xr9pm\" (UID: \"8cc290d6-7eee-4062-ade5-43a373ad230d\") " pod="kube-system/cilium-xr9pm" Sep 5 00:07:19.849961 sshd[4405]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:19.859941 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:51944.service: Deactivated successfully. Sep 5 00:07:19.862613 systemd[1]: session-30.scope: Deactivated successfully. Sep 5 00:07:19.864966 systemd-logind[1458]: Session 30 logged out. Waiting for processes to exit. Sep 5 00:07:19.871893 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:51960.service - OpenSSH per-connection server daemon (10.0.0.1:51960). Sep 5 00:07:19.873294 systemd-logind[1458]: Removed session 30. Sep 5 00:07:19.905736 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 51960 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:19.907528 sshd[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:19.912465 systemd-logind[1458]: New session 31 of user core. Sep 5 00:07:19.922613 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 5 00:07:20.059321 kubelet[2559]: E0905 00:07:20.059260 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:20.060079 containerd[1472]: time="2025-09-05T00:07:20.060030046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr9pm,Uid:8cc290d6-7eee-4062-ade5-43a373ad230d,Namespace:kube-system,Attempt:0,}" Sep 5 00:07:20.087273 containerd[1472]: time="2025-09-05T00:07:20.087037618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:20.087273 containerd[1472]: time="2025-09-05T00:07:20.087217889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:20.087273 containerd[1472]: time="2025-09-05T00:07:20.087236004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:20.087681 containerd[1472]: time="2025-09-05T00:07:20.087577181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:20.109668 systemd[1]: Started cri-containerd-f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800.scope - libcontainer container f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800. Sep 5 00:07:20.138024 containerd[1472]: time="2025-09-05T00:07:20.137963075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr9pm,Uid:8cc290d6-7eee-4062-ade5-43a373ad230d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\"" Sep 5 00:07:20.138727 kubelet[2559]: E0905 00:07:20.138694 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:20.141126 containerd[1472]: time="2025-09-05T00:07:20.141093345Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:07:20.156147 containerd[1472]: time="2025-09-05T00:07:20.156095575Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067\"" Sep 5 00:07:20.156665 containerd[1472]: time="2025-09-05T00:07:20.156603959Z" level=info msg="StartContainer for \"539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067\"" Sep 5 00:07:20.185590 systemd[1]: Started cri-containerd-539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067.scope - libcontainer container 539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067. Sep 5 00:07:20.215322 containerd[1472]: time="2025-09-05T00:07:20.215281829Z" level=info msg="StartContainer for \"539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067\" returns successfully" Sep 5 00:07:20.228297 systemd[1]: cri-containerd-539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067.scope: Deactivated successfully. Sep 5 00:07:20.333800 kubelet[2559]: E0905 00:07:20.333670 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:20.399672 containerd[1472]: time="2025-09-05T00:07:20.399596522Z" level=info msg="shim disconnected" id=539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067 namespace=k8s.io Sep 5 00:07:20.399672 containerd[1472]: time="2025-09-05T00:07:20.399660192Z" level=warning msg="cleaning up after shim disconnected" id=539c729b0e24096a5edbe7882bc396fd258369e7f4dc5099598cb6d012a9d067 namespace=k8s.io Sep 5 00:07:20.399672 containerd[1472]: time="2025-09-05T00:07:20.399670212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:20.882625 kubelet[2559]: I0905 00:07:20.882574 2559 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:07:20Z","lastTransitionTime":"2025-09-05T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized"} Sep 5 00:07:21.336545 kubelet[2559]: E0905 00:07:21.336508 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:21.338208 containerd[1472]: time="2025-09-05T00:07:21.338165621Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:07:21.386941 containerd[1472]: time="2025-09-05T00:07:21.386874159Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec\"" Sep 5 00:07:21.392212 containerd[1472]: time="2025-09-05T00:07:21.392157954Z" level=info msg="StartContainer for \"a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec\"" Sep 5 00:07:21.431699 systemd[1]: Started cri-containerd-a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec.scope - libcontainer container a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec. Sep 5 00:07:21.462151 containerd[1472]: time="2025-09-05T00:07:21.462101038Z" level=info msg="StartContainer for \"a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec\" returns successfully" Sep 5 00:07:21.469427 systemd[1]: cri-containerd-a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec.scope: Deactivated successfully. 
Sep 5 00:07:21.493595 containerd[1472]: time="2025-09-05T00:07:21.493525135Z" level=info msg="shim disconnected" id=a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec namespace=k8s.io
Sep 5 00:07:21.493595 containerd[1472]: time="2025-09-05T00:07:21.493588355Z" level=warning msg="cleaning up after shim disconnected" id=a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec namespace=k8s.io
Sep 5 00:07:21.493595 containerd[1472]: time="2025-09-05T00:07:21.493598504Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:07:21.931335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d6b33107435ffc29748bf4f1bc33394261692ca1fdb7198ca903b618d776ec-rootfs.mount: Deactivated successfully.
Sep 5 00:07:22.340386 kubelet[2559]: E0905 00:07:22.340350 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:22.342171 containerd[1472]: time="2025-09-05T00:07:22.342120042Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:07:22.473063 containerd[1472]: time="2025-09-05T00:07:22.472835231Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0\""
Sep 5 00:07:22.473465 containerd[1472]: time="2025-09-05T00:07:22.473402257Z" level=info msg="StartContainer for \"fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0\""
Sep 5 00:07:22.510810 systemd[1]: Started cri-containerd-fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0.scope - libcontainer container fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0.
Sep 5 00:07:22.545876 containerd[1472]: time="2025-09-05T00:07:22.545822868Z" level=info msg="StartContainer for \"fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0\" returns successfully"
Sep 5 00:07:22.548398 systemd[1]: cri-containerd-fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0.scope: Deactivated successfully.
Sep 5 00:07:22.576015 containerd[1472]: time="2025-09-05T00:07:22.575920916Z" level=info msg="shim disconnected" id=fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0 namespace=k8s.io
Sep 5 00:07:22.576015 containerd[1472]: time="2025-09-05T00:07:22.576004294Z" level=warning msg="cleaning up after shim disconnected" id=fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0 namespace=k8s.io
Sep 5 00:07:22.576015 containerd[1472]: time="2025-09-05T00:07:22.576016107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:07:22.932258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb0f6a107e37081fb0ec4f4a03df5f0db0cf670c6b605d9b45736de81ee098f0-rootfs.mount: Deactivated successfully.
Sep 5 00:07:23.089585 kubelet[2559]: E0905 00:07:23.089541 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:23.149978 kubelet[2559]: E0905 00:07:23.149911 2559 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:07:23.344370 kubelet[2559]: E0905 00:07:23.344322 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:23.346074 containerd[1472]: time="2025-09-05T00:07:23.345911671Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:07:23.406198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117036263.mount: Deactivated successfully.
Sep 5 00:07:23.409518 containerd[1472]: time="2025-09-05T00:07:23.409479894Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511\""
Sep 5 00:07:23.410041 containerd[1472]: time="2025-09-05T00:07:23.410017293Z" level=info msg="StartContainer for \"be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511\""
Sep 5 00:07:23.444015 systemd[1]: Started cri-containerd-be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511.scope - libcontainer container be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511.
Sep 5 00:07:23.470476 systemd[1]: cri-containerd-be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511.scope: Deactivated successfully.
Sep 5 00:07:23.473764 containerd[1472]: time="2025-09-05T00:07:23.473719991Z" level=info msg="StartContainer for \"be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511\" returns successfully"
Sep 5 00:07:23.503547 containerd[1472]: time="2025-09-05T00:07:23.503471127Z" level=info msg="shim disconnected" id=be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511 namespace=k8s.io
Sep 5 00:07:23.503547 containerd[1472]: time="2025-09-05T00:07:23.503527955Z" level=warning msg="cleaning up after shim disconnected" id=be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511 namespace=k8s.io
Sep 5 00:07:23.503547 containerd[1472]: time="2025-09-05T00:07:23.503538315Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:07:23.931643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be68e168ed901221351e94b34fae77b5406638b447c77f3e5477176d9c9ed511-rootfs.mount: Deactivated successfully.
Sep 5 00:07:24.348134 kubelet[2559]: E0905 00:07:24.348102 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:24.349576 containerd[1472]: time="2025-09-05T00:07:24.349543136Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:07:24.373710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416526971.mount: Deactivated successfully.
Sep 5 00:07:24.379485 containerd[1472]: time="2025-09-05T00:07:24.379408115Z" level=info msg="CreateContainer within sandbox \"f17b956c16fa6fa6499b7b851cb5609ff4ce5b19b5f399538eb8643bdd4ac800\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5\""
Sep 5 00:07:24.380131 containerd[1472]: time="2025-09-05T00:07:24.380102482Z" level=info msg="StartContainer for \"fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5\""
Sep 5 00:07:24.409571 systemd[1]: Started cri-containerd-fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5.scope - libcontainer container fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5.
Sep 5 00:07:24.440335 containerd[1472]: time="2025-09-05T00:07:24.440288245Z" level=info msg="StartContainer for \"fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5\" returns successfully"
Sep 5 00:07:24.934486 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 5 00:07:25.360459 kubelet[2559]: E0905 00:07:25.360399 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:25.376159 kubelet[2559]: I0905 00:07:25.376095 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xr9pm" podStartSLOduration=6.376073889 podStartE2EDuration="6.376073889s" podCreationTimestamp="2025-09-05 00:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:07:25.375342132 +0000 UTC m=+97.404661920" watchObservedRunningTime="2025-09-05 00:07:25.376073889 +0000 UTC m=+97.405393667"
Sep 5 00:07:26.362070 kubelet[2559]: E0905 00:07:26.362009 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:28.209639 systemd-networkd[1408]: lxc_health: Link UP
Sep 5 00:07:28.219620 systemd-networkd[1408]: lxc_health: Gained carrier
Sep 5 00:07:28.734745 systemd[1]: run-containerd-runc-k8s.io-fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5-runc.5VNdDX.mount: Deactivated successfully.
Sep 5 00:07:29.998562 systemd-networkd[1408]: lxc_health: Gained IPv6LL
Sep 5 00:07:30.061690 kubelet[2559]: E0905 00:07:30.061098 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:30.369341 kubelet[2559]: E0905 00:07:30.369277 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:30.842643 systemd[1]: run-containerd-runc-k8s.io-fca7f116c9b87bd7eb93d22829dcef316ec4a712f99a89c2cd1b83a3d33a61f5-runc.GnZyGO.mount: Deactivated successfully.
Sep 5 00:07:31.371039 kubelet[2559]: E0905 00:07:31.370994 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:35.106574 sshd[4413]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:35.111464 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:51960.service: Deactivated successfully.
Sep 5 00:07:35.113876 systemd[1]: session-31.scope: Deactivated successfully.
Sep 5 00:07:35.114754 systemd-logind[1458]: Session 31 logged out. Waiting for processes to exit.
Sep 5 00:07:35.116295 systemd-logind[1458]: Removed session 31.