Sep 4 00:09:35.886835 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:09:35.886856 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:09:35.886868 kernel: BIOS-provided physical RAM map:
Sep 4 00:09:35.886875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 00:09:35.886881 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 00:09:35.886887 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 00:09:35.886895 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 00:09:35.886902 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 4 00:09:35.886915 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 00:09:35.886923 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 00:09:35.886933 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 00:09:35.886949 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 00:09:35.886958 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 00:09:35.886967 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 00:09:35.886977 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 00:09:35.886987 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 00:09:35.887004 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 00:09:35.887014 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 00:09:35.887023 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 00:09:35.887033 kernel: NX (Execute Disable) protection: active
Sep 4 00:09:35.887042 kernel: APIC: Static calls initialized
Sep 4 00:09:35.887052 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 4 00:09:35.887062 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 4 00:09:35.887071 kernel: extended physical RAM map:
Sep 4 00:09:35.887081 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 00:09:35.887091 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 00:09:35.887101 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 00:09:35.887115 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 00:09:35.887124 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 4 00:09:35.887134 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 4 00:09:35.887144 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 4 00:09:35.887153 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 4 00:09:35.887163 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 4 00:09:35.887172 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 00:09:35.887182 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 00:09:35.887192 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 00:09:35.887202 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 00:09:35.887211 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 00:09:35.887225 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 00:09:35.887235 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 00:09:35.887250 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 00:09:35.887260 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 00:09:35.887270 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 00:09:35.887280 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 00:09:35.887294 kernel: efi: EFI v2.7 by EDK II
Sep 4 00:09:35.887306 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 4 00:09:35.887317 kernel: random: crng init done
Sep 4 00:09:35.887330 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 4 00:09:35.887339 kernel: secureboot: Secure boot enabled
Sep 4 00:09:35.887349 kernel: SMBIOS 2.8 present.
Sep 4 00:09:35.887359 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 00:09:35.887369 kernel: DMI: Memory slots populated: 1/1
Sep 4 00:09:35.887379 kernel: Hypervisor detected: KVM
Sep 4 00:09:35.887389 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 00:09:35.887403 kernel: kvm-clock: using sched offset of 7493791370 cycles
Sep 4 00:09:35.887414 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 00:09:35.887424 kernel: tsc: Detected 2794.748 MHz processor
Sep 4 00:09:35.887452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:09:35.887463 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:09:35.887473 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 4 00:09:35.887483 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 00:09:35.887500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:09:35.887510 kernel: Using GB pages for direct mapping
Sep 4 00:09:35.887526 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:09:35.887537 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 4 00:09:35.887547 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 00:09:35.887557 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887568 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887578 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 4 00:09:35.887588 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887598 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887618 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887631 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:09:35.887641 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 00:09:35.887652 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 4 00:09:35.887662 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 4 00:09:35.887672 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 4 00:09:35.887682 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 4 00:09:35.887692 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 4 00:09:35.887702 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 4 00:09:35.887712 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 4 00:09:35.887725 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 4 00:09:35.887735 kernel: No NUMA configuration found
Sep 4 00:09:35.887745 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 4 00:09:35.887756 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 4 00:09:35.887766 kernel: Zone ranges:
Sep 4 00:09:35.887776 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:09:35.887786 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 4 00:09:35.887796 kernel: Normal empty
Sep 4 00:09:35.887806 kernel: Device empty
Sep 4 00:09:35.887816 kernel: Movable zone start for each node
Sep 4 00:09:35.887829 kernel: Early memory node ranges
Sep 4 00:09:35.887839 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 4 00:09:35.887849 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 4 00:09:35.887860 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 4 00:09:35.887870 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 4 00:09:35.887880 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 4 00:09:35.887890 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 4 00:09:35.887900 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:09:35.887911 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 4 00:09:35.887924 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 00:09:35.887934 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 00:09:35.887945 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 00:09:35.887954 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 4 00:09:35.887964 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 00:09:35.887974 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 00:09:35.887985 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:09:35.887995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 00:09:35.888005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 00:09:35.888023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:09:35.888034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 00:09:35.888044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 00:09:35.888055 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:09:35.888065 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 00:09:35.888076 kernel: TSC deadline timer available
Sep 4 00:09:35.888086 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:09:35.888096 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:09:35.888107 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:09:35.888130 kernel: CPU topo: Max. threads per core: 1
Sep 4 00:09:35.888141 kernel: CPU topo: Num. cores per package: 4
Sep 4 00:09:35.888152 kernel: CPU topo: Num. threads per package: 4
Sep 4 00:09:35.888165 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 00:09:35.888180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 00:09:35.888190 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 00:09:35.888201 kernel: kvm-guest: setup PV sched yield
Sep 4 00:09:35.888212 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 00:09:35.888226 kernel: Booting paravirtualized kernel on KVM
Sep 4 00:09:35.888237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:09:35.888248 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 00:09:35.888259 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 00:09:35.888270 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 00:09:35.888280 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 00:09:35.888291 kernel: kvm-guest: PV spinlocks enabled
Sep 4 00:09:35.888302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:09:35.888314 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:09:35.888329 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:09:35.888340 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:09:35.888351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 00:09:35.888362 kernel: Fallback order for Node 0: 0
Sep 4 00:09:35.888373 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 4 00:09:35.888383 kernel: Policy zone: DMA32
Sep 4 00:09:35.888394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:09:35.888405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 00:09:35.888419 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:09:35.888447 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:09:35.888459 kernel: Dynamic Preempt: voluntary
Sep 4 00:09:35.888469 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:09:35.888481 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:09:35.888492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 00:09:35.888503 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:09:35.888514 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:09:35.888524 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:09:35.888540 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:09:35.888551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 00:09:35.888562 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:09:35.888573 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:09:35.888588 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:09:35.888599 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 00:09:35.888620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:09:35.888631 kernel: Console: colour dummy device 80x25
Sep 4 00:09:35.888642 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:09:35.888656 kernel: ACPI: Core revision 20240827
Sep 4 00:09:35.888667 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 00:09:35.888678 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:09:35.888688 kernel: x2apic enabled
Sep 4 00:09:35.888699 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:09:35.888710 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 00:09:35.888721 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 00:09:35.888732 kernel: kvm-guest: setup PV IPIs
Sep 4 00:09:35.888743 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 00:09:35.888758 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 4 00:09:35.888769 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 4 00:09:35.888780 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 00:09:35.888790 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 00:09:35.888801 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 00:09:35.888817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:09:35.888828 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:09:35.888838 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:09:35.888849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 00:09:35.888864 kernel: active return thunk: retbleed_return_thunk
Sep 4 00:09:35.888874 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 00:09:35.888885 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 00:09:35.888896 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 00:09:35.888907 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 00:09:35.888919 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 00:09:35.888929 kernel: active return thunk: srso_return_thunk
Sep 4 00:09:35.888940 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 00:09:35.888955 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:09:35.888966 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:09:35.888977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:09:35.888987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:09:35.889013 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 00:09:35.889024 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:09:35.889046 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:09:35.889069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:09:35.889098 kernel: landlock: Up and running.
Sep 4 00:09:35.889115 kernel: SELinux: Initializing.
Sep 4 00:09:35.889126 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:09:35.889137 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:09:35.889148 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 00:09:35.889171 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 00:09:35.889200 kernel: ... version: 0
Sep 4 00:09:35.889225 kernel: ... bit width: 48
Sep 4 00:09:35.889236 kernel: ... generic registers: 6
Sep 4 00:09:35.889247 kernel: ... value mask: 0000ffffffffffff
Sep 4 00:09:35.889261 kernel: ... max period: 00007fffffffffff
Sep 4 00:09:35.889272 kernel: ... fixed-purpose events: 0
Sep 4 00:09:35.889283 kernel: ... event mask: 000000000000003f
Sep 4 00:09:35.889293 kernel: signal: max sigframe size: 1776
Sep 4 00:09:35.889303 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:09:35.889314 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:09:35.889325 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:09:35.889335 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:09:35.889346 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:09:35.889360 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 00:09:35.889371 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 00:09:35.889382 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 4 00:09:35.889393 kernel: Memory: 2411272K/2552216K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135016K reserved, 0K cma-reserved)
Sep 4 00:09:35.889405 kernel: devtmpfs: initialized
Sep 4 00:09:35.889415 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:09:35.889454 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 4 00:09:35.889467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 4 00:09:35.889478 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:09:35.889492 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 00:09:35.889503 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:09:35.889513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:09:35.889524 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:09:35.889534 kernel: audit: type=2000 audit(1756944573.807:1): state=initialized audit_enabled=0 res=1
Sep 4 00:09:35.889545 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:09:35.889556 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:09:35.889566 kernel: cpuidle: using governor menu
Sep 4 00:09:35.889577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:09:35.889591 kernel: dca service started, version 1.12.1
Sep 4 00:09:35.889611 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 4 00:09:35.889622 kernel: PCI: Using configuration type 1 for base access
Sep 4 00:09:35.889632 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:09:35.889643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:09:35.889654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:09:35.889664 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:09:35.889675 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:09:35.889685 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:09:35.889699 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:09:35.889710 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:09:35.889721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 00:09:35.889731 kernel: ACPI: Interpreter enabled
Sep 4 00:09:35.889741 kernel: ACPI: PM: (supports S0 S5)
Sep 4 00:09:35.889752 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:09:35.889762 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:09:35.889773 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 00:09:35.889784 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 00:09:35.889797 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 00:09:35.890062 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 00:09:35.890223 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 00:09:35.890422 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 00:09:35.890478 kernel: PCI host bridge to bus 0000:00
Sep 4 00:09:35.890717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 00:09:35.890928 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 00:09:35.891101 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 00:09:35.891250 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 00:09:35.891405 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 00:09:35.891587 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 00:09:35.891754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 00:09:35.891954 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 00:09:35.892153 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 00:09:35.892317 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 4 00:09:35.892506 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 4 00:09:35.892726 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 4 00:09:35.892958 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 00:09:35.893111 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 00:09:35.893236 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 4 00:09:35.893379 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 4 00:09:35.893527 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 00:09:35.893682 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 00:09:35.893833 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 4 00:09:35.894038 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 4 00:09:35.894176 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 00:09:35.894372 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 00:09:35.894560 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 4 00:09:35.894718 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 4 00:09:35.894840 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 00:09:35.894961 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 4 00:09:35.895099 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 00:09:35.895221 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 00:09:35.895387 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 00:09:35.895530 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 4 00:09:35.895665 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 4 00:09:35.895798 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 00:09:35.895920 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 4 00:09:35.895931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 00:09:35.895939 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 00:09:35.895952 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 00:09:35.895961 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 00:09:35.895969 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 00:09:35.895977 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 00:09:35.895985 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 00:09:35.895993 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 00:09:35.896000 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 00:09:35.896008 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 00:09:35.896016 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 00:09:35.896027 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 00:09:35.896034 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 00:09:35.896042 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 00:09:35.896050 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 00:09:35.896058 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 00:09:35.896066 kernel: iommu: Default domain type: Translated
Sep 4 00:09:35.896074 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:09:35.896082 kernel: efivars: Registered efivars operations
Sep 4 00:09:35.896090 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:09:35.896100 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 00:09:35.896108 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 4 00:09:35.896116 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 4 00:09:35.896123 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 4 00:09:35.896131 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 4 00:09:35.896139 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 4 00:09:35.896265 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 00:09:35.896397 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 00:09:35.896653 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 00:09:35.896671 kernel: vgaarb: loaded
Sep 4 00:09:35.896679 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 00:09:35.896688 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 00:09:35.896696 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 00:09:35.896704 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:09:35.896712 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:09:35.896720 kernel: pnp: PnP ACPI init
Sep 4 00:09:35.896890 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 00:09:35.896906 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 00:09:35.896914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:09:35.896923 kernel: NET: Registered PF_INET protocol family
Sep 4 00:09:35.896931 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:09:35.896939 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 00:09:35.896947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:09:35.896955 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 00:09:35.896964 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 00:09:35.896971 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 00:09:35.896982 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:09:35.896990 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:09:35.896998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:09:35.897006 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:09:35.897129 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 4 00:09:35.897253 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 4 00:09:35.897378 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 00:09:35.897505 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 00:09:35.897631 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 00:09:35.897746 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 00:09:35.897886 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 00:09:35.898122 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 00:09:35.898141 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:09:35.898153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 4 00:09:35.898164 kernel: Initialise system trusted keyrings
Sep 4 00:09:35.898176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 00:09:35.898191 kernel: Key type asymmetric registered
Sep 4 00:09:35.898202 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:09:35.898231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:09:35.898245 kernel: io scheduler mq-deadline registered
Sep 4 00:09:35.898256 kernel: io scheduler kyber registered
Sep 4 00:09:35.898267 kernel: io scheduler bfq registered
Sep 4 00:09:35.898278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:09:35.898290 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 00:09:35.898302 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 00:09:35.898317 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 00:09:35.898328 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:09:35.898340 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:09:35.898351 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 00:09:35.898362 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 00:09:35.898374 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 00:09:35.898386 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 00:09:35.898612 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 00:09:35.898768 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 00:09:35.898914 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T00:09:35 UTC (1756944575)
Sep 4 00:09:35.899054 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 00:09:35.899069 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 00:09:35.899081 kernel: efifb: probing for efifb
Sep 4 00:09:35.899092 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 00:09:35.899104 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 00:09:35.899116 kernel: efifb: scrolling: redraw
Sep 4 00:09:35.899127 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 00:09:35.899142 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 00:09:35.899154 kernel: fb0: EFI VGA frame buffer device
Sep 4 00:09:35.899167 kernel: pstore: Using crash dump compression: deflate
Sep 4 00:09:35.899179 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 00:09:35.899190 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:09:35.899202 kernel: Segment Routing with IPv6
Sep 4 00:09:35.899215 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:09:35.899227 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:09:35.899238 kernel: Key type dns_resolver registered
Sep 4 00:09:35.899249 kernel: IPI shorthand broadcast: enabled
Sep 4 00:09:35.899261 kernel: sched_clock: Marking stable (3069004290, 160627656)->(3246894215, -17262269)
Sep 4 00:09:35.899272 kernel: registered taskstats version 1
Sep 4 00:09:35.899284 kernel: Loading compiled-in X.509 certificates
Sep 4 00:09:35.899296 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:09:35.899307 kernel: Demotion targets for Node 0: null
Sep 4 00:09:35.899321 kernel: Key type .fscrypt registered
Sep 4 00:09:35.899332 kernel: Key type fscrypt-provisioning registered
Sep 4 00:09:35.899344 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:09:35.899355 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:09:35.899367 kernel: ima: No architecture policies found
Sep 4 00:09:35.899378 kernel: clk: Disabling unused clocks
Sep 4 00:09:35.899390 kernel: Warning: unable to open an initial console.
Sep 4 00:09:35.899402 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 4 00:09:35.899413 kernel: Write protecting the kernel read-only data: 24576k Sep 4 00:09:35.899450 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 4 00:09:35.899462 kernel: Run /init as init process Sep 4 00:09:35.899474 kernel: with arguments: Sep 4 00:09:35.899485 kernel: /init Sep 4 00:09:35.899496 kernel: with environment: Sep 4 00:09:35.899507 kernel: HOME=/ Sep 4 00:09:35.899519 kernel: TERM=linux Sep 4 00:09:35.899530 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 00:09:35.899546 systemd[1]: Successfully made /usr/ read-only. Sep 4 00:09:35.899561 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:09:35.899576 systemd[1]: Detected virtualization kvm. Sep 4 00:09:35.899588 systemd[1]: Detected architecture x86-64. Sep 4 00:09:35.899610 systemd[1]: Running in initrd. Sep 4 00:09:35.899622 systemd[1]: No hostname configured, using default hostname. Sep 4 00:09:35.899634 systemd[1]: Hostname set to . Sep 4 00:09:35.899649 systemd[1]: Initializing machine ID from VM UUID. Sep 4 00:09:35.899661 systemd[1]: Queued start job for default target initrd.target. Sep 4 00:09:35.899673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:09:35.899685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:09:35.899698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 00:09:35.899711 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 00:09:35.899723 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 00:09:35.899736 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 00:09:35.899753 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 00:09:35.899766 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 00:09:35.899778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:09:35.899790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:09:35.899802 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:09:35.899814 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:09:35.899826 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:09:35.899841 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:09:35.899853 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:09:35.899865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 00:09:35.899877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 00:09:35.899889 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 00:09:35.899901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:09:35.899914 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:09:35.899926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:09:35.899938 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:09:35.899952 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 00:09:35.899964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 4 00:09:35.899977 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 00:09:35.899989 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 00:09:35.900002 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 00:09:35.900014 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:09:35.900026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 00:09:35.900038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:09:35.900053 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 00:09:35.900066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:09:35.900078 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 00:09:35.900090 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:09:35.900131 systemd-journald[220]: Collecting audit messages is disabled. Sep 4 00:09:35.900165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:09:35.900178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:09:35.900190 systemd-journald[220]: Journal started Sep 4 00:09:35.900219 systemd-journald[220]: Runtime Journal (/run/log/journal/4d4035c27a294c4091cab03cd3a19a89) is 6M, max 48.2M, 42.2M free. Sep 4 00:09:35.891065 systemd-modules-load[221]: Inserted module 'overlay' Sep 4 00:09:35.905383 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:09:35.907287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:09:35.913577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 00:09:35.918176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 00:09:35.923916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:09:35.926940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 00:09:35.929101 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 00:09:35.931271 kernel: Bridge firewalling registered Sep 4 00:09:35.931148 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 4 00:09:35.932829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:09:35.934969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:09:35.935891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:09:35.952087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:09:35.954709 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 00:09:35.957703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:09:35.974945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:09:35.991704 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:09:36.030959 systemd-resolved[262]: Positive Trust Anchors: Sep 4 00:09:36.030977 systemd-resolved[262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:09:36.031009 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:09:36.033697 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 4 00:09:36.035036 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:09:36.042016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:09:36.130476 kernel: SCSI subsystem initialized Sep 4 00:09:36.139453 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:09:36.150464 kernel: iscsi: registered transport (tcp) Sep 4 00:09:36.173518 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:09:36.173587 kernel: QLogic iSCSI HBA Driver Sep 4 00:09:36.197200 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:09:36.216475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:09:36.217254 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:09:36.296134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:09:36.330685 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 4 00:09:36.391476 kernel: raid6: avx2x4 gen() 29648 MB/s Sep 4 00:09:36.408459 kernel: raid6: avx2x2 gen() 30793 MB/s Sep 4 00:09:36.425556 kernel: raid6: avx2x1 gen() 24830 MB/s Sep 4 00:09:36.425629 kernel: raid6: using algorithm avx2x2 gen() 30793 MB/s Sep 4 00:09:36.449465 kernel: raid6: .... xor() 19313 MB/s, rmw enabled Sep 4 00:09:36.449514 kernel: raid6: using avx2x2 recovery algorithm Sep 4 00:09:36.475468 kernel: xor: automatically using best checksumming function avx Sep 4 00:09:36.651485 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:09:36.661600 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:09:36.681782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:09:36.715956 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 4 00:09:36.721727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:09:36.731674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:09:36.773173 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Sep 4 00:09:36.813177 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:09:36.817226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:09:36.914286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:09:36.918921 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 00:09:36.957728 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 00:09:36.963474 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 00:09:36.979186 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 4 00:09:36.979226 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:09:36.979243 kernel: GPT:9289727 != 19775487 Sep 4 00:09:36.979256 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 00:09:36.979269 kernel: GPT:9289727 != 19775487 Sep 4 00:09:36.979282 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 00:09:36.979295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:09:36.994489 kernel: libata version 3.00 loaded. Sep 4 00:09:36.996486 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 00:09:37.020182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:09:37.020395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:09:37.049779 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 00:09:37.050025 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 00:09:37.050022 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:09:37.053798 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 00:09:37.053995 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 00:09:37.054163 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 00:09:37.055591 kernel: AES CTR mode by8 optimization enabled Sep 4 00:09:37.055507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:09:37.057055 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 00:09:37.062540 kernel: scsi host0: ahci Sep 4 00:09:37.067627 kernel: scsi host1: ahci Sep 4 00:09:37.072452 kernel: scsi host2: ahci Sep 4 00:09:37.079475 kernel: scsi host3: ahci Sep 4 00:09:37.081515 kernel: scsi host4: ahci Sep 4 00:09:37.102458 kernel: scsi host5: ahci Sep 4 00:09:37.102750 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 4 00:09:37.102767 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 4 00:09:37.102781 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 4 00:09:37.106010 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 4 00:09:37.106050 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 4 00:09:37.109623 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 4 00:09:37.116613 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 00:09:37.117314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:09:37.139124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 00:09:37.139812 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 00:09:37.149100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 00:09:37.159655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 00:09:37.161315 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:09:37.198761 disk-uuid[633]: Primary Header is updated. Sep 4 00:09:37.198761 disk-uuid[633]: Secondary Entries is updated. Sep 4 00:09:37.198761 disk-uuid[633]: Secondary Header is updated. 
Sep 4 00:09:37.203465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:09:37.416473 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 00:09:37.416556 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 00:09:37.417465 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 00:09:37.418461 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 00:09:37.419968 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 00:09:37.419987 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 00:09:37.421099 kernel: ata3.00: applying bridge limits Sep 4 00:09:37.423044 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 00:09:37.423071 kernel: ata3.00: configured for UDMA/100 Sep 4 00:09:37.424477 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 00:09:37.425848 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 00:09:37.426473 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 00:09:37.488471 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 00:09:37.488837 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 00:09:37.510484 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 00:09:37.964789 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 00:09:37.967451 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:09:37.970011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:09:37.970702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:09:37.972172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 00:09:38.007012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:09:38.268794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:09:38.268869 disk-uuid[634]: The operation has completed successfully. 
Sep 4 00:09:38.308145 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:09:38.308279 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:09:38.346361 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:09:38.369356 sh[662]: Success Sep 4 00:09:38.403942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:09:38.404019 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:09:38.405085 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:09:38.420477 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 00:09:38.455013 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:09:38.459312 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:09:38.479009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 00:09:38.538468 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (674) Sep 4 00:09:38.540509 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:09:38.540552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:09:38.545466 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:09:38.545491 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:09:38.546771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 00:09:38.549114 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:09:38.551524 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 00:09:38.554548 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 4 00:09:38.557248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 00:09:38.581107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Sep 4 00:09:38.581161 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:09:38.581172 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:09:38.584723 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:09:38.584799 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:09:38.589766 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:09:38.590404 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:09:38.593740 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 00:09:38.685345 ignition[750]: Ignition 2.21.0 Sep 4 00:09:38.685368 ignition[750]: Stage: fetch-offline Sep 4 00:09:38.698593 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:09:38.685421 ignition[750]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:38.752276 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 00:09:38.685458 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:38.685567 ignition[750]: parsed url from cmdline: "" Sep 4 00:09:38.685572 ignition[750]: no config URL provided Sep 4 00:09:38.685579 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:09:38.685590 ignition[750]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:09:38.685620 ignition[750]: op(1): [started] loading QEMU firmware config module Sep 4 00:09:38.685628 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 00:09:38.765505 ignition[750]: op(1): [finished] loading QEMU firmware config module Sep 4 00:09:38.794125 systemd-networkd[852]: lo: Link UP Sep 4 00:09:38.794138 systemd-networkd[852]: lo: Gained carrier Sep 4 00:09:38.796217 systemd-networkd[852]: Enumeration completed Sep 4 00:09:38.797145 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:09:38.797150 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:09:38.797322 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:09:38.799068 systemd-networkd[852]: eth0: Link UP Sep 4 00:09:38.799304 systemd-networkd[852]: eth0: Gained carrier Sep 4 00:09:38.799314 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:09:38.807585 systemd[1]: Reached target network.target - Network. 
Sep 4 00:09:38.818479 ignition[750]: parsing config with SHA512: b7b6a676493a7e81a8044355c22e2b9db012a9b9efd857e5e526ff04e57a1b21e8baf67b733de7a81a93911f069fcd7c2b1765074e94f1ca6221a3885f3bf721 Sep 4 00:09:38.818499 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 00:09:38.822803 unknown[750]: fetched base config from "system" Sep 4 00:09:38.822819 unknown[750]: fetched user config from "qemu" Sep 4 00:09:38.823565 ignition[750]: fetch-offline: fetch-offline passed Sep 4 00:09:38.823628 ignition[750]: Ignition finished successfully Sep 4 00:09:38.828100 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:09:38.829059 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 00:09:38.830183 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 00:09:38.863386 ignition[857]: Ignition 2.21.0 Sep 4 00:09:38.863406 ignition[857]: Stage: kargs Sep 4 00:09:38.863593 ignition[857]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:38.863605 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:38.868456 ignition[857]: kargs: kargs passed Sep 4 00:09:38.868583 ignition[857]: Ignition finished successfully Sep 4 00:09:38.873764 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:09:38.877535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 00:09:38.903396 ignition[865]: Ignition 2.21.0 Sep 4 00:09:38.903410 ignition[865]: Stage: disks Sep 4 00:09:38.903608 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:38.903622 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:38.906757 ignition[865]: disks: disks passed Sep 4 00:09:38.906874 ignition[865]: Ignition finished successfully Sep 4 00:09:38.931076 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 00:09:38.933909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:09:38.934208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:09:38.938661 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:09:38.940857 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:09:38.945086 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:09:38.948369 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 00:09:38.981211 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 00:09:38.988911 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:09:38.990255 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 00:09:39.116527 kernel: EXT4-fs (vda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:09:39.117706 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:09:39.119359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:09:39.122968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:09:39.124977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:09:39.126076 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 4 00:09:39.126131 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:09:39.126162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:09:39.141731 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 00:09:39.143622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 00:09:39.152472 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Sep 4 00:09:39.154941 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:09:39.154997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:09:39.158140 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:09:39.158188 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:09:39.161280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:09:39.184773 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:09:39.190232 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:09:39.195799 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:09:39.200961 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:09:39.310721 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:09:39.313043 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:09:39.315120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:09:39.335470 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:09:39.353185 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 00:09:39.368101 ignition[998]: INFO : Ignition 2.21.0 Sep 4 00:09:39.368101 ignition[998]: INFO : Stage: mount Sep 4 00:09:39.370205 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:39.370205 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:39.370205 ignition[998]: INFO : mount: mount passed Sep 4 00:09:39.370205 ignition[998]: INFO : Ignition finished successfully Sep 4 00:09:39.374782 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:09:39.379150 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:09:39.538968 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 00:09:39.541118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:09:39.579457 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Sep 4 00:09:39.581777 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:09:39.581801 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:09:39.585482 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:09:39.585592 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:09:39.587637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 00:09:39.632970 ignition[1027]: INFO : Ignition 2.21.0 Sep 4 00:09:39.632970 ignition[1027]: INFO : Stage: files Sep 4 00:09:39.635931 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:39.635931 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:39.635931 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping Sep 4 00:09:39.635931 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 00:09:39.635931 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 00:09:39.643092 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 00:09:39.643092 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 00:09:39.643092 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 00:09:39.639980 unknown[1027]: wrote ssh authorized keys file for user: core Sep 4 00:09:39.648882 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:09:39.648882 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 00:09:39.692891 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 00:09:39.838660 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:09:39.838660 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 00:09:39.842500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 00:09:40.060024 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 00:09:40.182241 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 00:09:40.184494 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 00:09:40.186506 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 00:09:40.188418 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:09:40.190509 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:09:40.194077 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:09:40.195836 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:09:40.197543 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:09:40.199234 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:09:40.206621 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:09:40.208724 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:09:40.208724 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:09:40.215313 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:09:40.215313 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:09:40.220717 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 4 00:09:40.385694 systemd-networkd[852]: eth0: Gained IPv6LL Sep 4 00:09:40.705011 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 00:09:41.205596 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:09:41.205596 ignition[1027]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 00:09:41.209705 ignition[1027]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:09:41.222781 ignition[1027]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:09:41.222781 ignition[1027]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 00:09:41.222781 ignition[1027]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 00:09:41.227153 ignition[1027]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 00:09:41.227153 ignition[1027]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 00:09:41.227153 ignition[1027]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 00:09:41.232543 ignition[1027]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 00:09:41.252020 ignition[1027]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 00:09:41.257483 ignition[1027]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:09:41.259371 ignition[1027]: INFO : files: files passed Sep 4 00:09:41.259371 ignition[1027]: INFO : Ignition finished successfully Sep 4 00:09:41.270780 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 00:09:41.274259 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 00:09:41.275559 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 00:09:41.292223 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 00:09:41.294199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 4 00:09:41.298833 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 00:09:41.303319 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:09:41.305508 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:09:41.307308 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:09:41.310973 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:09:41.312665 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 00:09:41.316815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 00:09:41.372780 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 00:09:41.372938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 00:09:41.373992 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 00:09:41.429978 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 00:09:41.430782 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 00:09:41.431918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 00:09:41.460731 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:09:41.463047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 00:09:41.485681 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:09:41.486981 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:09:41.489253 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 4 00:09:41.489771 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 00:09:41.489886 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:09:41.493226 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 00:09:41.493815 systemd[1]: Stopped target basic.target - Basic System. Sep 4 00:09:41.494129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 00:09:41.494470 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:09:41.494936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 00:09:41.495255 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:09:41.495912 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 00:09:41.496221 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:09:41.496722 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 00:09:41.497033 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 00:09:41.497347 systemd[1]: Stopped target swap.target - Swaps. Sep 4 00:09:41.497653 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 00:09:41.497763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:09:41.729273 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:09:41.730033 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:09:41.730337 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 00:09:41.730742 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:09:41.734536 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 00:09:41.734672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 4 00:09:41.735345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 00:09:41.735509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:09:41.739827 systemd[1]: Stopped target paths.target - Path Units. Sep 4 00:09:41.741898 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 00:09:41.744048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:09:41.744813 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 00:09:41.745114 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 00:09:41.745466 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 00:09:41.745563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:09:41.745976 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 00:09:41.746069 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 00:09:41.752851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 00:09:41.752980 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:09:41.754831 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 00:09:41.754943 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 00:09:41.758813 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 00:09:41.760065 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 00:09:41.761899 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 00:09:41.762025 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:09:41.769835 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 00:09:41.770865 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:09:41.776905 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 4 00:09:41.777041 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 00:09:41.795339 ignition[1082]: INFO : Ignition 2.21.0 Sep 4 00:09:41.795339 ignition[1082]: INFO : Stage: umount Sep 4 00:09:41.797799 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:09:41.797799 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:09:41.800692 ignition[1082]: INFO : umount: umount passed Sep 4 00:09:41.800692 ignition[1082]: INFO : Ignition finished successfully Sep 4 00:09:41.801057 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 00:09:41.801801 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 00:09:41.801934 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 00:09:41.803671 systemd[1]: Stopped target network.target - Network. Sep 4 00:09:41.805337 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 00:09:41.805393 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 00:09:41.806053 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 00:09:41.806102 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 00:09:41.806405 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 00:09:41.806499 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 00:09:41.806945 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 00:09:41.806989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 00:09:41.808325 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 00:09:41.905946 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 00:09:41.915221 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 00:09:41.915400 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 4 00:09:41.920012 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 00:09:41.920308 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 00:09:41.920471 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 00:09:41.924605 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 00:09:41.925350 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 00:09:41.992333 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 00:09:41.992407 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:09:41.996883 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 00:09:41.998801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 00:09:41.998866 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:09:42.000993 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:09:42.001044 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:09:42.002240 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 00:09:42.002290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 00:09:42.002945 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 00:09:42.002991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:09:42.007408 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:09:42.009097 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 00:09:42.009180 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:09:42.009590 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 4 00:09:42.012657 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 00:09:42.014006 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 00:09:42.014101 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 00:09:42.028282 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 00:09:42.028474 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 00:09:42.038484 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 00:09:42.038738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:09:42.039416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 00:09:42.039516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 00:09:42.043064 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 00:09:42.043104 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:09:42.045642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 00:09:42.045695 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:09:42.046710 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 00:09:42.046761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 00:09:42.053798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 00:09:42.053873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:09:42.056302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 00:09:42.461775 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 00:09:42.461880 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:09:42.469552 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 4 00:09:42.469629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:09:42.473753 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 00:09:42.473804 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:09:42.478223 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 00:09:42.478275 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:09:42.479089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:09:42.479137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:09:42.487654 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 4 00:09:42.487751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 4 00:09:42.487817 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 00:09:42.487883 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:09:42.496561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 00:09:42.496749 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 00:09:42.497458 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 00:09:42.505184 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 00:09:42.536649 systemd[1]: Switching root. Sep 4 00:09:42.913311 systemd-journald[220]: Journal stopped Sep 4 00:09:46.622150 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Sep 4 00:09:46.622290 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 00:09:46.622322 kernel: SELinux: policy capability open_perms=1 Sep 4 00:09:46.622337 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 00:09:46.622353 kernel: SELinux: policy capability always_check_network=0 Sep 4 00:09:46.622369 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 00:09:46.622384 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 00:09:46.622406 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 00:09:46.622444 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 00:09:46.622460 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 00:09:46.622471 kernel: audit: type=1403 audit(1756944585.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 00:09:46.622494 systemd[1]: Successfully loaded SELinux policy in 65.877ms. Sep 4 00:09:46.622522 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.854ms. Sep 4 00:09:46.622536 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:09:46.622551 systemd[1]: Detected virtualization kvm. Sep 4 00:09:46.622563 systemd[1]: Detected architecture x86-64. Sep 4 00:09:46.622583 systemd[1]: Detected first boot. Sep 4 00:09:46.622597 systemd[1]: Initializing machine ID from VM UUID. Sep 4 00:09:46.622609 zram_generator::config[1129]: No configuration found. 
Sep 4 00:09:46.622625 kernel: Guest personality initialized and is inactive Sep 4 00:09:46.622636 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 00:09:46.622648 kernel: Initialized host personality Sep 4 00:09:46.622666 kernel: NET: Registered PF_VSOCK protocol family Sep 4 00:09:46.622678 systemd[1]: Populated /etc with preset unit settings. Sep 4 00:09:46.622690 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 00:09:46.622707 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 00:09:46.622720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 00:09:46.622734 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 00:09:46.622746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 00:09:46.622759 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 00:09:46.622771 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 00:09:46.622783 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 00:09:46.622795 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 00:09:46.622808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 00:09:46.622825 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 00:09:46.622837 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 00:09:46.622851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:09:46.622864 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:09:46.622876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 4 00:09:46.622888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 00:09:46.622901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 00:09:46.622918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 00:09:46.622930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 00:09:46.622945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:09:46.622961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:09:46.622973 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 00:09:46.622988 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 00:09:46.623002 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 00:09:46.623016 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 00:09:46.623028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:09:46.623046 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:09:46.623059 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:09:46.623070 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:09:46.623082 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 00:09:46.623094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 00:09:46.623111 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 00:09:46.623124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:09:46.623138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:09:46.623150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 00:09:46.623165 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 00:09:46.623181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 00:09:46.623194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 00:09:46.623210 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 00:09:46.623222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:46.623237 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 00:09:46.623249 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 00:09:46.623261 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 00:09:46.623274 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 00:09:46.623291 systemd[1]: Reached target machines.target - Containers. Sep 4 00:09:46.623305 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 00:09:46.623325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:09:46.623338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 00:09:46.623352 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 00:09:46.623365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:09:46.623376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:09:46.623389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:09:46.623406 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 4 00:09:46.623421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:09:46.623462 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 00:09:46.623478 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 00:09:46.623490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 00:09:46.623504 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 00:09:46.623516 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 00:09:46.623529 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:09:46.623551 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:09:46.623563 kernel: loop: module loaded Sep 4 00:09:46.623575 kernel: fuse: init (API version 7.41) Sep 4 00:09:46.623586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 00:09:46.623599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:09:46.623611 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 00:09:46.623624 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 00:09:46.623645 kernel: ACPI: bus type drm_connector registered Sep 4 00:09:46.623657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:09:46.623669 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 00:09:46.623681 systemd[1]: Stopped verity-setup.service. 
Sep 4 00:09:46.623694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:46.623712 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 00:09:46.623753 systemd-journald[1200]: Collecting audit messages is disabled. Sep 4 00:09:46.623784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 00:09:46.623797 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 00:09:46.624823 systemd-journald[1200]: Journal started Sep 4 00:09:46.624859 systemd-journald[1200]: Runtime Journal (/run/log/journal/4d4035c27a294c4091cab03cd3a19a89) is 6M, max 48.2M, 42.2M free. Sep 4 00:09:46.261568 systemd[1]: Queued start job for default target multi-user.target. Sep 4 00:09:46.284589 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 00:09:46.285786 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 00:09:46.286465 systemd[1]: systemd-journald.service: Consumed 1.427s CPU time. Sep 4 00:09:46.628031 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:09:46.629056 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 00:09:46.630379 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 00:09:46.631724 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 00:09:46.633180 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 00:09:46.635064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:09:46.636842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 00:09:46.637175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 00:09:46.638837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 4 00:09:46.639161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:09:46.640786 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:09:46.641095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:09:46.642911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:09:46.643235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:09:46.645052 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 00:09:46.645401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 00:09:46.647074 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:09:46.647408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:09:46.649373 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:09:46.651193 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:09:46.653375 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 00:09:46.670485 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:09:46.673877 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 00:09:46.680118 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 00:09:46.682777 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 00:09:46.683044 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:09:46.685964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 00:09:46.715253 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 4 00:09:46.717144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:09:46.719715 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 00:09:46.723184 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 00:09:46.725090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 00:09:46.728013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 00:09:46.732633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:09:46.734859 systemd-journald[1200]: Time spent on flushing to /var/log/journal/4d4035c27a294c4091cab03cd3a19a89 is 32.655ms for 1043 entries. Sep 4 00:09:46.734859 systemd-journald[1200]: System Journal (/var/log/journal/4d4035c27a294c4091cab03cd3a19a89) is 8M, max 195.6M, 187.6M free. Sep 4 00:09:46.781874 systemd-journald[1200]: Received client request to flush runtime journal. Sep 4 00:09:46.781935 kernel: loop0: detected capacity change from 0 to 113872 Sep 4 00:09:46.734775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:09:46.740419 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 00:09:46.850256 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 00:09:46.831483 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:09:46.835866 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 00:09:46.837974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 4 00:09:46.839811 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 00:09:46.841275 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 00:09:46.844284 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 00:09:46.847087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 00:09:46.859013 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 00:09:46.864030 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 00:09:46.879985 kernel: loop1: detected capacity change from 0 to 221472 Sep 4 00:09:46.877804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:09:46.882326 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 4 00:09:46.882346 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 4 00:09:46.892777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:09:46.897663 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 00:09:46.957457 kernel: loop2: detected capacity change from 0 to 146240 Sep 4 00:09:47.062477 kernel: loop3: detected capacity change from 0 to 113872 Sep 4 00:09:47.069948 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 00:09:47.073838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:09:47.098468 kernel: loop4: detected capacity change from 0 to 221472 Sep 4 00:09:47.117935 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 4 00:09:47.117960 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 4 00:09:47.125505 kernel: loop5: detected capacity change from 0 to 146240 Sep 4 00:09:47.126074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 00:09:47.186257 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 00:09:47.200521 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 00:09:47.201126 (sd-merge)[1269]: Merged extensions into '/usr'. Sep 4 00:09:47.205974 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 00:09:47.206077 systemd[1]: Reloading... Sep 4 00:09:47.317460 zram_generator::config[1298]: No configuration found. Sep 4 00:09:47.518025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:09:47.614708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 00:09:47.615385 systemd[1]: Reloading finished in 408 ms. Sep 4 00:09:47.640186 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 00:09:47.659053 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 00:09:47.666627 systemd[1]: Starting ensure-sysext.service... Sep 4 00:09:47.669382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:09:47.695644 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 4 00:09:47.695669 systemd[1]: Reloading... Sep 4 00:09:47.720558 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 00:09:47.720597 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 00:09:47.720902 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 4 00:09:47.721166 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 00:09:47.722087 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 00:09:47.722386 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 4 00:09:47.722483 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 4 00:09:47.727084 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:09:47.727099 systemd-tmpfiles[1337]: Skipping /boot Sep 4 00:09:47.747330 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:09:47.747349 systemd-tmpfiles[1337]: Skipping /boot Sep 4 00:09:47.773372 zram_generator::config[1371]: No configuration found. Sep 4 00:09:47.919109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:09:48.001695 systemd[1]: Reloading finished in 305 ms. Sep 4 00:09:48.024176 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 00:09:48.046508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:09:48.058286 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:09:48.062449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 00:09:48.085168 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 00:09:48.091744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:09:48.097675 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 00:09:48.122348 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Sep 4 00:09:48.125982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:48.126310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:09:48.165340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:09:48.190907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:09:48.193312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:09:48.194519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:09:48.194638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:09:48.196380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:09:48.207453 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 00:09:48.223867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:48.227245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:09:48.227544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:09:48.229920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:09:48.230163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:09:48.232893 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:09:48.233148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 4 00:09:48.235572 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 00:09:48.250891 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 00:09:48.267815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:48.268154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:09:48.270464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:09:48.302254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:09:48.312344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:09:48.313399 systemd-udevd[1429]: Using default interface naming scheme 'v255'. Sep 4 00:09:48.320531 augenrules[1443]: No rules Sep 4 00:09:48.323049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:09:48.324620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:09:48.324671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:09:48.324752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 00:09:48.324776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:09:48.325585 systemd[1]: Finished ensure-sysext.service. 
Sep 4 00:09:48.327930 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:09:48.328235 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:09:48.336852 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 00:09:48.339639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:09:48.339939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:09:48.342375 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:09:48.342741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:09:48.345236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:09:48.345561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:09:48.347767 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:09:48.348016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:09:48.353100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:09:48.358762 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 00:09:48.365991 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:09:48.367420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 00:09:48.367520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:09:48.376843 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 00:09:48.398359 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 00:09:48.503753 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 4 00:09:48.574847 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 00:09:48.678178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 00:09:48.704748 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 00:09:48.718475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 00:09:48.734059 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 00:09:48.734192 kernel: ACPI: button: Power Button [PWRF] Sep 4 00:09:48.746040 systemd-networkd[1467]: lo: Link UP Sep 4 00:09:48.746570 systemd-networkd[1467]: lo: Gained carrier Sep 4 00:09:48.749022 systemd-networkd[1467]: Enumeration completed Sep 4 00:09:48.749675 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:09:48.749749 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:09:48.750645 systemd-networkd[1467]: eth0: Link UP Sep 4 00:09:48.751079 systemd-networkd[1467]: eth0: Gained carrier Sep 4 00:09:48.751162 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:09:48.753743 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:09:48.762725 systemd-resolved[1407]: Positive Trust Anchors: Sep 4 00:09:48.762745 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:09:48.762778 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:09:48.766664 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 00:09:48.770090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 00:09:48.772743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 00:09:48.774632 systemd-resolved[1407]: Defaulting to hostname 'linux'. Sep 4 00:09:48.781011 systemd-networkd[1467]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 00:09:48.782530 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Sep 4 00:09:49.990942 systemd-resolved[1407]: Clock change detected. Flushing caches. Sep 4 00:09:49.991053 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 00:09:49.991123 systemd-timesyncd[1475]: Initial clock synchronization to Thu 2025-09-04 00:09:49.990895 UTC. Sep 4 00:09:49.991338 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 00:09:49.993209 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:09:49.996689 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 00:09:49.996993 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 00:09:49.997232 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 00:09:49.997842 systemd[1]: Reached target network.target - Network. Sep 4 00:09:49.999591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:09:50.001137 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:09:50.002502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 00:09:50.004182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 00:09:50.005883 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 00:09:50.007496 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 00:09:50.009221 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 00:09:50.009272 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:09:50.010436 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 00:09:50.012033 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 00:09:50.013528 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 00:09:50.015105 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:09:50.017319 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 00:09:50.020153 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 00:09:50.024433 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 4 00:09:50.027632 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 00:09:50.029227 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 00:09:50.077522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 00:09:50.092063 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 00:09:50.094361 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 00:09:50.097606 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:09:50.099241 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:09:50.100527 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:09:50.100571 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:09:50.104769 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 00:09:50.108106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 00:09:50.119294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 00:09:50.137600 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 00:09:50.140953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 00:09:50.142197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 00:09:50.144932 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 00:09:50.151257 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 00:09:50.156310 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 4 00:09:50.166376 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 00:09:50.170896 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 00:09:50.177467 jq[1528]: false Sep 4 00:09:50.180438 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 4 00:09:50.180455 oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 4 00:09:50.185971 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 00:09:50.188516 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 00:09:50.189548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 00:09:50.191257 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 00:09:50.193568 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 4 00:09:50.193560 oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 4 00:09:50.193678 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:09:50.193597 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:09:50.193750 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 4 00:09:50.193697 oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 4 00:09:50.194921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 00:09:50.197856 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 4 00:09:50.198275 extend-filesystems[1529]: Found /dev/vda6 Sep 4 00:09:50.200550 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 00:09:50.202150 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 00:09:50.202423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 00:09:50.205258 extend-filesystems[1529]: Found /dev/vda9 Sep 4 00:09:50.207045 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 00:09:50.207352 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 00:09:50.210340 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 4 00:09:50.210340 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:09:50.208969 oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 4 00:09:50.208991 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:09:50.218136 extend-filesystems[1529]: Checking size of /dev/vda9 Sep 4 00:09:50.238062 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 00:09:50.244075 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 00:09:50.244788 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 00:09:50.245045 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 00:09:50.251396 jq[1546]: true Sep 4 00:09:50.260695 extend-filesystems[1529]: Resized partition /dev/vda9 Sep 4 00:09:50.269821 update_engine[1544]: I20250904 00:09:50.269706 1544 main.cc:92] Flatcar Update Engine starting Sep 4 00:09:50.363002 jq[1561]: true Sep 4 00:09:50.363467 extend-filesystems[1570]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 00:09:50.367292 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 00:09:50.366246 dbus-daemon[1526]: [system] SELinux support is enabled Sep 4 00:09:50.369897 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 00:09:50.379185 update_engine[1544]: I20250904 00:09:50.379108 1544 update_check_scheduler.cc:74] Next update check in 4m17s Sep 4 00:09:50.393951 tar[1549]: linux-amd64/helm Sep 4 00:09:50.395667 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 00:09:50.396380 systemd[1]: Started update-engine.service - Update Engine. Sep 4 00:09:50.400727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 00:09:50.402625 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 00:09:50.402860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 00:09:50.430429 extend-filesystems[1570]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 00:09:50.430429 extend-filesystems[1570]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 00:09:50.430429 extend-filesystems[1570]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 00:09:50.409080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 00:09:50.434405 extend-filesystems[1529]: Resized filesystem in /dev/vda9 Sep 4 00:09:50.410226 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 00:09:50.410465 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 00:09:50.415059 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 00:09:50.423295 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 00:09:50.437727 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 00:09:50.438023 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 00:09:50.529986 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:09:50.530678 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 00:09:50.530709 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 00:09:50.532081 systemd-logind[1539]: New seat seat0. Sep 4 00:09:50.533432 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 00:09:50.535201 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 00:09:50.536832 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 4 00:09:50.635956 kernel: kvm_amd: TSC scaling supported Sep 4 00:09:50.636076 kernel: kvm_amd: Nested Virtualization enabled Sep 4 00:09:50.636091 kernel: kvm_amd: Nested Paging enabled Sep 4 00:09:50.636103 kernel: kvm_amd: LBR virtualization supported Sep 4 00:09:50.637231 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 00:09:50.637257 kernel: kvm_amd: Virtual GIF supported Sep 4 00:09:50.702769 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 00:09:50.706670 kernel: EDAC MC: Ver: 3.0.0 Sep 4 00:09:50.749826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:09:50.795054 containerd[1569]: time="2025-09-04T00:09:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 00:09:50.796554 containerd[1569]: time="2025-09-04T00:09:50.796523120Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 4 00:09:50.806698 containerd[1569]: time="2025-09-04T00:09:50.806623606Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.991µs" Sep 4 00:09:50.806698 containerd[1569]: time="2025-09-04T00:09:50.806680703Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 00:09:50.806698 containerd[1569]: time="2025-09-04T00:09:50.806701422Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 00:09:50.806948 containerd[1569]: time="2025-09-04T00:09:50.806936463Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 00:09:50.806978 containerd[1569]: time="2025-09-04T00:09:50.806952022Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:09:50.807005 containerd[1569]: time="2025-09-04T00:09:50.806980615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807128 containerd[1569]: time="2025-09-04T00:09:50.807088317Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807128 containerd[1569]: time="2025-09-04T00:09:50.807109958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807559 containerd[1569]: time="2025-09-04T00:09:50.807436420Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807559 containerd[1569]: time="2025-09-04T00:09:50.807462129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807559 containerd[1569]: time="2025-09-04T00:09:50.807472258Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807559 containerd[1569]: time="2025-09-04T00:09:50.807480814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807956 containerd[1569]: time="2025-09-04T00:09:50.807573798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807956 containerd[1569]: time="2025-09-04T00:09:50.807857671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:09:50.807956 containerd[1569]: time="2025-09-04T00:09:50.807898267Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:09:50.807956 containerd[1569]: time="2025-09-04T00:09:50.807910259Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 00:09:50.807956 containerd[1569]: time="2025-09-04T00:09:50.807940666Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 00:09:50.808203 containerd[1569]: time="2025-09-04T00:09:50.808176368Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 00:09:50.808269 containerd[1569]: time="2025-09-04T00:09:50.808249455Z" level=info msg="metadata content store policy set" policy=shared Sep 4 00:09:50.809024 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 00:09:50.819301 containerd[1569]: time="2025-09-04T00:09:50.819235302Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819349025Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819367209Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819380173Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819392947Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819404038Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819414919Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819426200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819438212Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819462287Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 00:09:50.819473 containerd[1569]: time="2025-09-04T00:09:50.819474631Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819494448Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819632537Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819679345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819693732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819708068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819718668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819729018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819739357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819753023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 00:09:50.819773 containerd[1569]: time="2025-09-04T00:09:50.819771748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 00:09:50.820193 containerd[1569]: time="2025-09-04T00:09:50.819783660Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 00:09:50.820193 containerd[1569]: time="2025-09-04T00:09:50.819803948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 00:09:50.820193 containerd[1569]: time="2025-09-04T00:09:50.819876695Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 00:09:50.820193 containerd[1569]: time="2025-09-04T00:09:50.819890681Z" level=info msg="Start snapshots syncer" Sep 4 00:09:50.820193 containerd[1569]: time="2025-09-04T00:09:50.819920287Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 00:09:50.820397 containerd[1569]: time="2025-09-04T00:09:50.820183971Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 00:09:50.820397 containerd[1569]: time="2025-09-04T00:09:50.820232262Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820315558Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820452425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820480447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820494794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820507308Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820521885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820532876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:09:50.820561 containerd[1569]: time="2025-09-04T00:09:50.820542914Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820572069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820582849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820592207Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820634596Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820691142Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820709317Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820718844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820726419Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820746727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820761324Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820780931Z" level=info msg="runtime interface created" Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820786071Z" level=info msg="created NRI interface" Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820797642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820807671Z" level=info msg="Connect containerd service" Sep 4 00:09:50.820922 containerd[1569]: time="2025-09-04T00:09:50.820829843Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:09:50.821744 containerd[1569]: 
time="2025-09-04T00:09:50.821625335Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:09:50.834745 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 00:09:50.840901 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 00:09:50.843373 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884). Sep 4 00:09:50.866853 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 00:09:50.867546 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 00:09:50.873941 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 00:09:50.921591 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 00:09:50.928789 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 00:09:50.933048 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 00:09:50.934836 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 00:09:50.940951 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:09:50.942610 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:50.947320 containerd[1569]: time="2025-09-04T00:09:50.947237417Z" level=info msg="Start subscribing containerd event" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947768143Z" level=info msg="Start recovering state" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947918875Z" level=info msg="Start event monitor" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947942750Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947953901Z" level=info msg="Start streaming server" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947966735Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947977195Z" level=info msg="runtime interface starting up..." Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.947986192Z" level=info msg="starting plugins..." Sep 4 00:09:50.948190 containerd[1569]: time="2025-09-04T00:09:50.948007281Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 00:09:50.948630 containerd[1569]: time="2025-09-04T00:09:50.948606736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 00:09:50.948936 containerd[1569]: time="2025-09-04T00:09:50.948913411Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:09:50.949079 containerd[1569]: time="2025-09-04T00:09:50.949060727Z" level=info msg="containerd successfully booted in 0.154830s" Sep 4 00:09:50.949780 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 00:09:50.987195 systemd-logind[1539]: New session 1 of user core. Sep 4 00:09:50.988846 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 00:09:50.991814 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 00:09:51.017990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 00:09:51.024100 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 00:09:51.048399 tar[1549]: linux-amd64/LICENSE Sep 4 00:09:51.048531 tar[1549]: linux-amd64/README.md Sep 4 00:09:51.051456 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 00:09:51.054811 systemd-logind[1539]: New session c1 of user core. Sep 4 00:09:51.069696 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:09:51.205050 systemd[1649]: Queued start job for default target default.target. Sep 4 00:09:51.217008 systemd[1649]: Created slice app.slice - User Application Slice. Sep 4 00:09:51.217042 systemd[1649]: Reached target paths.target - Paths. Sep 4 00:09:51.217100 systemd[1649]: Reached target timers.target - Timers. Sep 4 00:09:51.218783 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 00:09:51.232065 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 00:09:51.232237 systemd[1649]: Reached target sockets.target - Sockets. Sep 4 00:09:51.232293 systemd[1649]: Reached target basic.target - Basic System. Sep 4 00:09:51.232344 systemd[1649]: Reached target default.target - Main User Target. Sep 4 00:09:51.232391 systemd[1649]: Startup finished in 168ms. Sep 4 00:09:51.232771 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 00:09:51.236096 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 00:09:51.305234 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:60890.service - OpenSSH per-connection server daemon (10.0.0.1:60890). Sep 4 00:09:51.369351 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 60890 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:09:51.371338 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:51.376773 systemd-logind[1539]: New session 2 of user core. Sep 4 00:09:51.385916 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 00:09:51.445019 sshd[1665]: Connection closed by 10.0.0.1 port 60890 Sep 4 00:09:51.445420 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:51.447790 systemd-networkd[1467]: eth0: Gained IPv6LL Sep 4 00:09:51.455597 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 00:09:51.457874 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:60890.service: Deactivated successfully. Sep 4 00:09:51.460317 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 00:09:51.461370 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Sep 4 00:09:51.465029 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 00:09:51.468609 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 00:09:51.472015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:09:51.496575 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 00:09:51.499251 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:60900.service - OpenSSH per-connection server daemon (10.0.0.1:60900). Sep 4 00:09:51.508477 systemd-logind[1539]: Removed session 2. Sep 4 00:09:51.527295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 00:09:51.531588 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Sep 4 00:09:51.532067 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 00:09:51.534017 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 00:09:51.552604 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 60900 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:09:51.554589 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:51.560301 systemd-logind[1539]: New session 3 of user core. Sep 4 00:09:51.577845 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:09:51.747390 sshd[1691]: Connection closed by 10.0.0.1 port 60900 Sep 4 00:09:51.747781 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:51.753385 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:60900.service: Deactivated successfully. Sep 4 00:09:51.756331 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 00:09:51.757347 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Sep 4 00:09:51.759038 systemd-logind[1539]: Removed session 3. Sep 4 00:09:53.350923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:09:53.380103 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 00:09:53.381776 systemd[1]: Startup finished in 3.181s (kernel) + 9.311s (initrd) + 7.230s (userspace) = 19.723s. 
Sep 4 00:09:53.395207 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:09:54.355023 kubelet[1701]: E0904 00:09:54.354928 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:09:54.360289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:09:54.360622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:09:54.361317 systemd[1]: kubelet.service: Consumed 2.632s CPU time, 268.3M memory peak. Sep 4 00:10:01.771319 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:41816.service - OpenSSH per-connection server daemon (10.0.0.1:41816). Sep 4 00:10:01.823897 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 41816 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:01.825758 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:01.830455 systemd-logind[1539]: New session 4 of user core. Sep 4 00:10:01.839780 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 00:10:01.894089 sshd[1717]: Connection closed by 10.0.0.1 port 41816 Sep 4 00:10:01.894474 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:01.903126 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:41816.service: Deactivated successfully. Sep 4 00:10:01.905242 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:10:01.906104 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:10:01.910130 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:41824.service - OpenSSH per-connection server daemon (10.0.0.1:41824). 
Sep 4 00:10:01.910726 systemd-logind[1539]: Removed session 4. Sep 4 00:10:01.961502 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 41824 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:01.963425 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:01.968954 systemd-logind[1539]: New session 5 of user core. Sep 4 00:10:01.979795 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 00:10:02.030821 sshd[1725]: Connection closed by 10.0.0.1 port 41824 Sep 4 00:10:02.031218 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:02.042739 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:41824.service: Deactivated successfully. Sep 4 00:10:02.044872 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:10:02.045598 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:10:02.049044 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:41832.service - OpenSSH per-connection server daemon (10.0.0.1:41832). Sep 4 00:10:02.049597 systemd-logind[1539]: Removed session 5. Sep 4 00:10:02.110186 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 41832 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:02.111711 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:02.116209 systemd-logind[1539]: New session 6 of user core. Sep 4 00:10:02.125790 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 00:10:02.179986 sshd[1733]: Connection closed by 10.0.0.1 port 41832 Sep 4 00:10:02.180376 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:02.192483 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:41832.service: Deactivated successfully. Sep 4 00:10:02.194405 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 00:10:02.195324 systemd-logind[1539]: Session 6 logged out. 
Waiting for processes to exit. Sep 4 00:10:02.198201 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:41840.service - OpenSSH per-connection server daemon (10.0.0.1:41840). Sep 4 00:10:02.199220 systemd-logind[1539]: Removed session 6. Sep 4 00:10:02.258669 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 41840 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:02.260355 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:02.265358 systemd-logind[1539]: New session 7 of user core. Sep 4 00:10:02.280822 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 00:10:02.338356 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 00:10:02.338711 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:10:02.356962 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 4 00:10:02.359016 sshd[1742]: Connection closed by 10.0.0.1 port 41840 Sep 4 00:10:02.360160 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:02.370222 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:41840.service: Deactivated successfully. Sep 4 00:10:02.372510 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:10:02.373372 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:10:02.377623 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:41852.service - OpenSSH per-connection server daemon (10.0.0.1:41852). Sep 4 00:10:02.378558 systemd-logind[1539]: Removed session 7. Sep 4 00:10:02.440057 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 41852 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:02.442063 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:02.448005 systemd-logind[1539]: New session 8 of user core. 
Sep 4 00:10:02.457931 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 00:10:02.515953 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 00:10:02.516348 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:10:02.537298 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 4 00:10:02.546067 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 00:10:02.546468 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:10:02.559832 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:10:02.618904 augenrules[1775]: No rules Sep 4 00:10:02.621331 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:10:02.621740 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:10:02.623255 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 4 00:10:02.625400 sshd[1751]: Connection closed by 10.0.0.1 port 41852 Sep 4 00:10:02.625831 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:02.640707 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:41852.service: Deactivated successfully. Sep 4 00:10:02.643193 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 00:10:02.644238 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Sep 4 00:10:02.648003 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:41866.service - OpenSSH per-connection server daemon (10.0.0.1:41866). Sep 4 00:10:02.648724 systemd-logind[1539]: Removed session 8. 
Sep 4 00:10:02.712988 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 41866 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:10:02.715034 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:10:02.720382 systemd-logind[1539]: New session 9 of user core. Sep 4 00:10:02.734000 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 00:10:02.796093 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 00:10:02.796566 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:10:03.635354 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 00:10:03.661470 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 00:10:04.216807 dockerd[1807]: time="2025-09-04T00:10:04.216707991Z" level=info msg="Starting up" Sep 4 00:10:04.218667 dockerd[1807]: time="2025-09-04T00:10:04.218614297Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 00:10:04.611187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 00:10:04.613459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:05.077628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 00:10:05.094120 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:10:05.362865 kubelet[1838]: E0904 00:10:05.362661 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:10:05.369036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:10:05.369314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:10:05.369856 systemd[1]: kubelet.service: Consumed 385ms CPU time, 111.2M memory peak. Sep 4 00:10:05.539748 dockerd[1807]: time="2025-09-04T00:10:05.539576755Z" level=info msg="Loading containers: start." Sep 4 00:10:05.552691 kernel: Initializing XFRM netlink socket Sep 4 00:10:06.011520 systemd-networkd[1467]: docker0: Link UP Sep 4 00:10:06.052350 dockerd[1807]: time="2025-09-04T00:10:06.052262817Z" level=info msg="Loading containers: done." Sep 4 00:10:06.117539 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1896307062-merged.mount: Deactivated successfully. 
Sep 4 00:10:06.142906 dockerd[1807]: time="2025-09-04T00:10:06.142801081Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:10:06.143029 dockerd[1807]: time="2025-09-04T00:10:06.142998071Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:10:06.143210 dockerd[1807]: time="2025-09-04T00:10:06.143183629Z" level=info msg="Initializing buildkit" Sep 4 00:10:06.544374 dockerd[1807]: time="2025-09-04T00:10:06.544293436Z" level=info msg="Completed buildkit initialization" Sep 4 00:10:06.551532 dockerd[1807]: time="2025-09-04T00:10:06.551466801Z" level=info msg="Daemon has completed initialization" Sep 4 00:10:06.551710 dockerd[1807]: time="2025-09-04T00:10:06.551578030Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:10:06.551824 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 00:10:07.678108 containerd[1569]: time="2025-09-04T00:10:07.678042806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 4 00:10:09.162556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044824002.mount: Deactivated successfully. 
Sep 4 00:10:13.280539 containerd[1569]: time="2025-09-04T00:10:13.280418712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:13.281412 containerd[1569]: time="2025-09-04T00:10:13.281345260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 4 00:10:13.284322 containerd[1569]: time="2025-09-04T00:10:13.284279313Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:13.742038 containerd[1569]: time="2025-09-04T00:10:13.741611487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:13.742988 containerd[1569]: time="2025-09-04T00:10:13.742922126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 6.064816602s" Sep 4 00:10:13.742988 containerd[1569]: time="2025-09-04T00:10:13.742980826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 4 00:10:13.743999 containerd[1569]: time="2025-09-04T00:10:13.743859314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 4 00:10:15.619849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 00:10:15.621857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:15.865934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:15.877064 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:10:15.943001 kubelet[2093]: E0904 00:10:15.942920 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:10:15.947874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:10:15.948080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:10:15.948475 systemd[1]: kubelet.service: Consumed 253ms CPU time, 110.8M memory peak. 
Sep 4 00:10:17.487408 containerd[1569]: time="2025-09-04T00:10:17.487308003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:17.498330 containerd[1569]: time="2025-09-04T00:10:17.498255899Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 4 00:10:17.500428 containerd[1569]: time="2025-09-04T00:10:17.500352932Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:17.514383 containerd[1569]: time="2025-09-04T00:10:17.514297278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:17.515733 containerd[1569]: time="2025-09-04T00:10:17.515681775Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.771787616s" Sep 4 00:10:17.515733 containerd[1569]: time="2025-09-04T00:10:17.515725577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 4 00:10:17.516509 containerd[1569]: time="2025-09-04T00:10:17.516385705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 4 00:10:20.910582 containerd[1569]: time="2025-09-04T00:10:20.910489453Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:20.919925 containerd[1569]: time="2025-09-04T00:10:20.919864228Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 4 00:10:20.927844 containerd[1569]: time="2025-09-04T00:10:20.927759136Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:20.936604 containerd[1569]: time="2025-09-04T00:10:20.936498198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:20.938102 containerd[1569]: time="2025-09-04T00:10:20.938040862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 3.421603841s" Sep 4 00:10:20.938102 containerd[1569]: time="2025-09-04T00:10:20.938091246Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 4 00:10:20.938695 containerd[1569]: time="2025-09-04T00:10:20.938658440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 4 00:10:22.867352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023813606.mount: Deactivated successfully. 
Sep 4 00:10:24.928790 containerd[1569]: time="2025-09-04T00:10:24.928687241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:24.933083 containerd[1569]: time="2025-09-04T00:10:24.933055827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 4 00:10:24.964276 containerd[1569]: time="2025-09-04T00:10:24.964162643Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:24.975729 containerd[1569]: time="2025-09-04T00:10:24.975627658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:24.976283 containerd[1569]: time="2025-09-04T00:10:24.976229505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.037530018s" Sep 4 00:10:24.976283 containerd[1569]: time="2025-09-04T00:10:24.976279080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 4 00:10:24.976919 containerd[1569]: time="2025-09-04T00:10:24.976869345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 00:10:25.905355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253250066.mount: Deactivated successfully. Sep 4 00:10:26.170687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 4 00:10:26.173881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:26.752801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:26.775134 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:10:26.812698 kubelet[2137]: E0904 00:10:26.812607 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:10:26.816936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:10:26.817195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:10:26.817713 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.5M memory peak. 
Sep 4 00:10:28.103322 containerd[1569]: time="2025-09-04T00:10:28.103238257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:28.252020 containerd[1569]: time="2025-09-04T00:10:28.251927643Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 00:10:28.253937 containerd[1569]: time="2025-09-04T00:10:28.253864686Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:28.280931 containerd[1569]: time="2025-09-04T00:10:28.280860075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:28.282317 containerd[1569]: time="2025-09-04T00:10:28.282276002Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.305371368s" Sep 4 00:10:28.282317 containerd[1569]: time="2025-09-04T00:10:28.282311219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 00:10:28.283001 containerd[1569]: time="2025-09-04T00:10:28.282960450Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 00:10:29.793355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468478508.mount: Deactivated successfully. 
Sep 4 00:10:29.877221 containerd[1569]: time="2025-09-04T00:10:29.877129588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:10:29.887166 containerd[1569]: time="2025-09-04T00:10:29.887078954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 00:10:29.890267 containerd[1569]: time="2025-09-04T00:10:29.890222845Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:10:29.898704 containerd[1569]: time="2025-09-04T00:10:29.898609457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:10:29.899279 containerd[1569]: time="2025-09-04T00:10:29.899233118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.616215959s" Sep 4 00:10:29.899279 containerd[1569]: time="2025-09-04T00:10:29.899266110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:10:29.900031 containerd[1569]: time="2025-09-04T00:10:29.899834626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 4 00:10:31.563584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533872208.mount: Deactivated 
successfully. Sep 4 00:10:35.624824 containerd[1569]: time="2025-09-04T00:10:35.624717664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:35.630446 containerd[1569]: time="2025-09-04T00:10:35.630379252Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 4 00:10:35.647704 containerd[1569]: time="2025-09-04T00:10:35.647595977Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:35.660911 containerd[1569]: time="2025-09-04T00:10:35.660819170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:10:35.661916 containerd[1569]: time="2025-09-04T00:10:35.661875756Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.762007306s" Sep 4 00:10:35.662001 containerd[1569]: time="2025-09-04T00:10:35.661918337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 4 00:10:35.662539 update_engine[1544]: I20250904 00:10:35.662428 1544 update_attempter.cc:509] Updating boot flags... Sep 4 00:10:36.920458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 00:10:36.922347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 00:10:37.159417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:37.179076 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:10:37.233547 kubelet[2287]: E0904 00:10:37.233487 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:10:37.237610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:10:37.237889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:10:37.238349 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.6M memory peak. Sep 4 00:10:38.271265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:38.271426 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.6M memory peak. Sep 4 00:10:38.273813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:38.301313 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-9.scope)... Sep 4 00:10:38.301332 systemd[1]: Reloading... Sep 4 00:10:38.406684 zram_generator::config[2348]: No configuration found. Sep 4 00:10:38.695282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:10:38.847922 systemd[1]: Reloading finished in 546 ms. Sep 4 00:10:38.933326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 00:10:38.933427 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 4 00:10:38.933772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:38.933818 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.3M memory peak. Sep 4 00:10:38.935496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:39.144694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:39.149066 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:10:39.299564 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:10:39.299564 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 00:10:39.299564 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 00:10:39.300015 kubelet[2393]: I0904 00:10:39.299637 2393 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:10:39.518928 kubelet[2393]: I0904 00:10:39.518882 2393 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 00:10:39.518928 kubelet[2393]: I0904 00:10:39.518916 2393 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:10:39.519207 kubelet[2393]: I0904 00:10:39.519185 2393 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 00:10:39.577532 kubelet[2393]: E0904 00:10:39.577471 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:39.578496 kubelet[2393]: I0904 00:10:39.578474 2393 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:10:39.592872 kubelet[2393]: I0904 00:10:39.592834 2393 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:10:39.602102 kubelet[2393]: I0904 00:10:39.602058 2393 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:10:39.605265 kubelet[2393]: I0904 00:10:39.605216 2393 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 00:10:39.605480 kubelet[2393]: I0904 00:10:39.605415 2393 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:10:39.605703 kubelet[2393]: I0904 00:10:39.605464 2393 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 00:10:39.605838 kubelet[2393]: I0904 00:10:39.605714 2393 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:10:39.605838 kubelet[2393]: I0904 00:10:39.605725 2393 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 00:10:39.605905 kubelet[2393]: I0904 00:10:39.605881 2393 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:10:39.612738 kubelet[2393]: I0904 00:10:39.612699 2393 kubelet.go:408] "Attempting to sync node with API server" Sep 4 00:10:39.612738 kubelet[2393]: I0904 00:10:39.612735 2393 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:10:39.612821 kubelet[2393]: I0904 00:10:39.612794 2393 kubelet.go:314] "Adding apiserver pod source" Sep 4 00:10:39.612846 kubelet[2393]: I0904 00:10:39.612834 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:10:39.619319 kubelet[2393]: W0904 00:10:39.619224 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:39.619319 kubelet[2393]: E0904 00:10:39.619307 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:39.619521 kubelet[2393]: W0904 00:10:39.619374 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:39.619521 kubelet[2393]: E0904 
00:10:39.619402 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:39.628079 kubelet[2393]: I0904 00:10:39.627730 2393 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:10:39.628229 kubelet[2393]: I0904 00:10:39.628203 2393 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:10:39.630501 kubelet[2393]: W0904 00:10:39.630471 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 00:10:39.639918 kubelet[2393]: I0904 00:10:39.639880 2393 server.go:1274] "Started kubelet" Sep 4 00:10:39.640349 kubelet[2393]: I0904 00:10:39.640315 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:10:39.640893 kubelet[2393]: I0904 00:10:39.640857 2393 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:10:39.641000 kubelet[2393]: I0904 00:10:39.640943 2393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:10:39.641419 kubelet[2393]: I0904 00:10:39.641392 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:10:39.642033 kubelet[2393]: I0904 00:10:39.642002 2393 server.go:449] "Adding debug handlers to kubelet server" Sep 4 00:10:39.648687 kubelet[2393]: I0904 00:10:39.648636 2393 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:10:39.650493 kubelet[2393]: I0904 00:10:39.650409 2393 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 00:10:39.650641 kubelet[2393]: E0904 00:10:39.650613 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:39.653724 kubelet[2393]: I0904 00:10:39.653694 2393 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 00:10:39.653781 kubelet[2393]: I0904 00:10:39.653753 2393 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:10:39.655162 kubelet[2393]: E0904 00:10:39.655130 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Sep 4 00:10:39.655634 kubelet[2393]: I0904 00:10:39.655603 2393 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:10:39.655727 kubelet[2393]: I0904 00:10:39.655708 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:10:39.658694 kubelet[2393]: W0904 00:10:39.658546 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:39.658694 kubelet[2393]: E0904 00:10:39.658593 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:39.659790 kubelet[2393]: I0904 00:10:39.659759 2393 factory.go:221] Registration of the containerd 
container factory successfully Sep 4 00:10:39.663995 kubelet[2393]: I0904 00:10:39.663755 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:10:39.665123 kubelet[2393]: I0904 00:10:39.665085 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:10:39.665123 kubelet[2393]: I0904 00:10:39.665119 2393 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 00:10:39.668398 kubelet[2393]: I0904 00:10:39.668365 2393 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 00:10:39.668613 kubelet[2393]: E0904 00:10:39.668427 2393 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:10:39.674826 kubelet[2393]: W0904 00:10:39.674786 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:39.674966 kubelet[2393]: E0904 00:10:39.674831 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:39.674966 kubelet[2393]: I0904 00:10:39.674957 2393 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 00:10:39.675012 kubelet[2393]: I0904 00:10:39.674969 2393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 00:10:39.675012 kubelet[2393]: I0904 00:10:39.674992 2393 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:10:39.676420 kubelet[2393]: E0904 00:10:39.675216 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861ebd74fc253d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 00:10:39.639835603 +0000 UTC m=+0.483938507,LastTimestamp:2025-09-04 00:10:39.639835603 +0000 UTC m=+0.483938507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 00:10:39.750864 kubelet[2393]: E0904 00:10:39.750788 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:39.769440 kubelet[2393]: E0904 00:10:39.769304 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 00:10:39.851717 kubelet[2393]: E0904 00:10:39.851618 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:39.856707 kubelet[2393]: E0904 00:10:39.856607 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Sep 4 00:10:39.952817 kubelet[2393]: E0904 00:10:39.952745 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:39.970394 kubelet[2393]: E0904 00:10:39.970334 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 00:10:40.044362 kubelet[2393]: E0904 00:10:40.044139 2393 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861ebd74fc253d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 00:10:39.639835603 +0000 UTC m=+0.483938507,LastTimestamp:2025-09-04 00:10:39.639835603 +0000 UTC m=+0.483938507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 00:10:40.053519 kubelet[2393]: E0904 00:10:40.053374 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.154153 kubelet[2393]: E0904 00:10:40.154064 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.254798 kubelet[2393]: E0904 00:10:40.254712 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.258446 kubelet[2393]: E0904 00:10:40.258391 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Sep 4 00:10:40.355760 kubelet[2393]: E0904 00:10:40.355573 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.371002 kubelet[2393]: E0904 00:10:40.370898 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" 
Sep 4 00:10:40.456512 kubelet[2393]: E0904 00:10:40.456407 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.556961 kubelet[2393]: E0904 00:10:40.556823 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.614140 kubelet[2393]: W0904 00:10:40.613973 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:40.614140 kubelet[2393]: E0904 00:10:40.614049 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:40.657999 kubelet[2393]: E0904 00:10:40.657872 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.719100 kubelet[2393]: W0904 00:10:40.719040 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:40.719100 kubelet[2393]: E0904 00:10:40.719106 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:40.758783 kubelet[2393]: E0904 00:10:40.758711 2393 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.859518 kubelet[2393]: E0904 00:10:40.859445 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:40.960299 kubelet[2393]: E0904 00:10:40.960208 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.059533 kubelet[2393]: E0904 00:10:41.059463 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Sep 4 00:10:41.060488 kubelet[2393]: E0904 00:10:41.060430 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.111681 kubelet[2393]: W0904 00:10:41.111564 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:41.111681 kubelet[2393]: E0904 00:10:41.111640 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:41.160834 kubelet[2393]: E0904 00:10:41.160759 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.171203 kubelet[2393]: E0904 00:10:41.171127 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 
00:10:41.220068 kubelet[2393]: W0904 00:10:41.219851 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:41.220068 kubelet[2393]: E0904 00:10:41.219957 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:41.261749 kubelet[2393]: E0904 00:10:41.261638 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.362540 kubelet[2393]: E0904 00:10:41.362460 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.463191 kubelet[2393]: E0904 00:10:41.463125 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.564074 kubelet[2393]: E0904 00:10:41.563899 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.655595 kubelet[2393]: E0904 00:10:41.655490 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:41.664962 kubelet[2393]: E0904 00:10:41.664901 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Sep 4 00:10:41.682591 kubelet[2393]: I0904 00:10:41.682537 2393 policy_none.go:49] "None policy: Start" Sep 4 00:10:41.683177 kubelet[2393]: I0904 00:10:41.683155 2393 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 00:10:41.683221 kubelet[2393]: I0904 00:10:41.683180 2393 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:10:41.765985 kubelet[2393]: E0904 00:10:41.765873 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.817140 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:10:41.844279 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:10:41.847985 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 00:10:41.866168 kubelet[2393]: E0904 00:10:41.866093 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:41.866467 kubelet[2393]: I0904 00:10:41.866429 2393 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:10:41.866832 kubelet[2393]: I0904 00:10:41.866785 2393 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:10:41.866832 kubelet[2393]: I0904 00:10:41.866805 2393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:10:41.867130 kubelet[2393]: I0904 00:10:41.867097 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:10:41.869066 kubelet[2393]: E0904 00:10:41.869036 2393 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 00:10:41.968636 kubelet[2393]: I0904 00:10:41.968575 2393 kubelet_node_status.go:72] "Attempting to register 
node" node="localhost" Sep 4 00:10:41.969231 kubelet[2393]: E0904 00:10:41.969192 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 4 00:10:42.171527 kubelet[2393]: I0904 00:10:42.171488 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:10:42.171993 kubelet[2393]: E0904 00:10:42.171933 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 4 00:10:42.489785 kubelet[2393]: W0904 00:10:42.488497 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:42.489785 kubelet[2393]: E0904 00:10:42.488623 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:42.574306 kubelet[2393]: I0904 00:10:42.574240 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:10:42.574929 kubelet[2393]: E0904 00:10:42.574859 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 4 00:10:42.661047 kubelet[2393]: E0904 00:10:42.660965 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Sep 4 00:10:42.782030 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 4 00:10:42.822085 systemd[1]: Created slice kubepods-burstable-pod8318028e2b234682d5bd4ebec7ccc9a4.slice - libcontainer container kubepods-burstable-pod8318028e2b234682d5bd4ebec7ccc9a4.slice. Sep 4 00:10:42.828549 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 4 00:10:42.875076 kubelet[2393]: I0904 00:10:42.874291 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 00:10:42.875313 kubelet[2393]: I0904 00:10:42.875254 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:42.875364 kubelet[2393]: I0904 00:10:42.875315 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:42.875443 kubelet[2393]: I0904 00:10:42.875403 2393 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:42.875443 kubelet[2393]: I0904 00:10:42.875441 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:42.875528 kubelet[2393]: I0904 00:10:42.875462 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:42.875528 kubelet[2393]: I0904 00:10:42.875486 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:42.875528 kubelet[2393]: I0904 00:10:42.875507 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:42.875528 kubelet[2393]: I0904 
00:10:42.875529 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:43.054753 kubelet[2393]: W0904 00:10:43.054487 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:43.054753 kubelet[2393]: E0904 00:10:43.054608 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:43.119603 containerd[1569]: time="2025-09-04T00:10:43.119529696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:43.126227 containerd[1569]: time="2025-09-04T00:10:43.126193295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8318028e2b234682d5bd4ebec7ccc9a4,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:43.132135 containerd[1569]: time="2025-09-04T00:10:43.132093502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:43.142110 kubelet[2393]: W0904 00:10:43.142017 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 4 00:10:43.142233 kubelet[2393]: E0904 00:10:43.142118 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:10:43.172562 containerd[1569]: time="2025-09-04T00:10:43.172491566Z" level=info msg="connecting to shim 0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0" address="unix:///run/containerd/s/c0d66ff6afb72a71894743b5b871be1bf3a41bdbc0d83aeed08f6411dfd6584b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:43.178526 containerd[1569]: time="2025-09-04T00:10:43.178434113Z" level=info msg="connecting to shim 0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1" address="unix:///run/containerd/s/d4375fbca1d9159a975371cd6a92a7a47124ff71a1158f89e9b100931399b7bf" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:43.207035 containerd[1569]: time="2025-09-04T00:10:43.206960406Z" level=info msg="connecting to shim 865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3" address="unix:///run/containerd/s/36f7e927a793e300563bceb7f7239ec97fcbd91f0610443606dd9fe0755c9812" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:43.213006 systemd[1]: Started cri-containerd-0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0.scope - libcontainer container 0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0. Sep 4 00:10:43.223798 systemd[1]: Started cri-containerd-0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1.scope - libcontainer container 0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1. 
Sep 4 00:10:43.259903 systemd[1]: Started cri-containerd-865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3.scope - libcontainer container 865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3. Sep 4 00:10:43.301181 containerd[1569]: time="2025-09-04T00:10:43.301115993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0\"" Sep 4 00:10:43.305663 containerd[1569]: time="2025-09-04T00:10:43.305538368Z" level=info msg="CreateContainer within sandbox \"0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:10:43.313272 containerd[1569]: time="2025-09-04T00:10:43.313233286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8318028e2b234682d5bd4ebec7ccc9a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1\"" Sep 4 00:10:43.316025 containerd[1569]: time="2025-09-04T00:10:43.315898181Z" level=info msg="CreateContainer within sandbox \"0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:10:43.320891 containerd[1569]: time="2025-09-04T00:10:43.320853653Z" level=info msg="Container 8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:43.323019 containerd[1569]: time="2025-09-04T00:10:43.322969891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3\"" Sep 4 00:10:43.324991 containerd[1569]: 
time="2025-09-04T00:10:43.324968206Z" level=info msg="CreateContainer within sandbox \"865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:10:43.334598 containerd[1569]: time="2025-09-04T00:10:43.334546332Z" level=info msg="CreateContainer within sandbox \"0cf83eff7a64676d03f9130cab7475fa4c12673ff190f327616c737adc8b80b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf\"" Sep 4 00:10:43.335100 containerd[1569]: time="2025-09-04T00:10:43.335074970Z" level=info msg="StartContainer for \"8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf\"" Sep 4 00:10:43.336712 containerd[1569]: time="2025-09-04T00:10:43.335990881Z" level=info msg="Container fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:43.336712 containerd[1569]: time="2025-09-04T00:10:43.336085961Z" level=info msg="connecting to shim 8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf" address="unix:///run/containerd/s/c0d66ff6afb72a71894743b5b871be1bf3a41bdbc0d83aeed08f6411dfd6584b" protocol=ttrpc version=3 Sep 4 00:10:43.346936 containerd[1569]: time="2025-09-04T00:10:43.346891987Z" level=info msg="Container 9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:43.350780 containerd[1569]: time="2025-09-04T00:10:43.350741579Z" level=info msg="CreateContainer within sandbox \"0db31186a837e43a79c9685e12bd5ea29dc65054c4318a27cee3fd219961a1a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb\"" Sep 4 00:10:43.351520 containerd[1569]: time="2025-09-04T00:10:43.351363585Z" level=info msg="StartContainer for 
\"fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb\"" Sep 4 00:10:43.353947 containerd[1569]: time="2025-09-04T00:10:43.353927800Z" level=info msg="connecting to shim fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb" address="unix:///run/containerd/s/d4375fbca1d9159a975371cd6a92a7a47124ff71a1158f89e9b100931399b7bf" protocol=ttrpc version=3 Sep 4 00:10:43.355754 containerd[1569]: time="2025-09-04T00:10:43.355709606Z" level=info msg="CreateContainer within sandbox \"865105d0dfb0dff18142392060a8ed718dfdbbd3ec82c2fd758feb8d8e3bc6d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc\"" Sep 4 00:10:43.356205 containerd[1569]: time="2025-09-04T00:10:43.356183781Z" level=info msg="StartContainer for \"9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc\"" Sep 4 00:10:43.357153 containerd[1569]: time="2025-09-04T00:10:43.357131241Z" level=info msg="connecting to shim 9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc" address="unix:///run/containerd/s/36f7e927a793e300563bceb7f7239ec97fcbd91f0610443606dd9fe0755c9812" protocol=ttrpc version=3 Sep 4 00:10:43.360873 systemd[1]: Started cri-containerd-8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf.scope - libcontainer container 8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf. 
Sep 4 00:10:43.377993 kubelet[2393]: I0904 00:10:43.377679 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:10:43.378278 kubelet[2393]: E0904 00:10:43.378251 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 4 00:10:43.384922 systemd[1]: Started cri-containerd-9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc.scope - libcontainer container 9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc. Sep 4 00:10:43.395920 systemd[1]: Started cri-containerd-fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb.scope - libcontainer container fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb. Sep 4 00:10:43.438761 containerd[1569]: time="2025-09-04T00:10:43.438709626Z" level=info msg="StartContainer for \"8dd560946cd556204eea4fe02595def36adb4c058f7f514d64dffe8ad76623cf\" returns successfully" Sep 4 00:10:43.494114 containerd[1569]: time="2025-09-04T00:10:43.493999824Z" level=info msg="StartContainer for \"9147f4d7ce7ae0cc210ee8c3d2071b97f89306430d577a11cf0b3939d38798cc\" returns successfully" Sep 4 00:10:43.494800 containerd[1569]: time="2025-09-04T00:10:43.494775039Z" level=info msg="StartContainer for \"fd7e824c3a8433b3c202d863dc91c549bb6ac11fd5e82b77af7c57f5751a63fb\" returns successfully" Sep 4 00:10:44.980533 kubelet[2393]: I0904 00:10:44.980483 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:10:45.025925 kubelet[2393]: I0904 00:10:45.025858 2393 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 00:10:45.025925 kubelet[2393]: E0904 00:10:45.025913 2393 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 00:10:45.035731 kubelet[2393]: E0904 00:10:45.035677 2393 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.136677 kubelet[2393]: E0904 00:10:45.136612 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.237286 kubelet[2393]: E0904 00:10:45.237165 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.337817 kubelet[2393]: E0904 00:10:45.337772 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.438388 kubelet[2393]: E0904 00:10:45.438345 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.539240 kubelet[2393]: E0904 00:10:45.539095 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.639810 kubelet[2393]: E0904 00:10:45.639744 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.740834 kubelet[2393]: E0904 00:10:45.740735 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.841476 kubelet[2393]: E0904 00:10:45.841339 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:45.942142 kubelet[2393]: E0904 00:10:45.942074 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.042707 kubelet[2393]: E0904 00:10:46.042635 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.143102 kubelet[2393]: E0904 00:10:46.142971 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" 
Sep 4 00:10:46.243901 kubelet[2393]: E0904 00:10:46.243849 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.344561 kubelet[2393]: E0904 00:10:46.344488 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.445149 kubelet[2393]: E0904 00:10:46.445099 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.545777 kubelet[2393]: E0904 00:10:46.545734 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.646220 kubelet[2393]: E0904 00:10:46.646172 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.747177 kubelet[2393]: E0904 00:10:46.747055 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.847671 kubelet[2393]: E0904 00:10:46.847599 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:46.948331 kubelet[2393]: E0904 00:10:46.948283 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.049099 kubelet[2393]: E0904 00:10:47.048964 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.150131 kubelet[2393]: E0904 00:10:47.150087 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.218760 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-9.scope)... Sep 4 00:10:47.218777 systemd[1]: Reloading... 
Sep 4 00:10:47.251670 kubelet[2393]: E0904 00:10:47.250701 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.298685 zram_generator::config[2710]: No configuration found. Sep 4 00:10:47.351291 kubelet[2393]: E0904 00:10:47.351162 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.400912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:10:47.451871 kubelet[2393]: E0904 00:10:47.451812 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.553873 kubelet[2393]: E0904 00:10:47.553753 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:10:47.570420 systemd[1]: Reloading finished in 351 ms. Sep 4 00:10:47.599410 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:47.617093 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:10:47.617564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:47.617663 systemd[1]: kubelet.service: Consumed 920ms CPU time, 128.4M memory peak. Sep 4 00:10:47.620789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:10:47.873943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:10:47.886283 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:10:48.048076 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:10:48.048076 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 00:10:48.048076 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:10:48.048580 kubelet[2754]: I0904 00:10:48.048314 2754 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:10:48.057961 kubelet[2754]: I0904 00:10:48.057893 2754 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 00:10:48.057961 kubelet[2754]: I0904 00:10:48.057925 2754 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:10:48.058227 kubelet[2754]: I0904 00:10:48.058207 2754 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 00:10:48.059888 kubelet[2754]: I0904 00:10:48.059828 2754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 00:10:48.063042 kubelet[2754]: I0904 00:10:48.062994 2754 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:10:48.126742 kubelet[2754]: I0904 00:10:48.125023 2754 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:10:48.130024 kubelet[2754]: I0904 00:10:48.130001 2754 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:10:48.130111 kubelet[2754]: I0904 00:10:48.130105 2754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 00:10:48.130263 kubelet[2754]: I0904 00:10:48.130230 2754 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:10:48.130416 kubelet[2754]: I0904 00:10:48.130259 2754 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 00:10:48.130522 kubelet[2754]: I0904 00:10:48.130426 2754 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:10:48.130522 kubelet[2754]: I0904 00:10:48.130435 2754 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 00:10:48.130522 kubelet[2754]: I0904 00:10:48.130460 2754 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:10:48.130591 kubelet[2754]: I0904 00:10:48.130573 2754 kubelet.go:408] "Attempting to sync node with API server" Sep 4 00:10:48.130591 kubelet[2754]: I0904 00:10:48.130588 2754 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:10:48.130639 kubelet[2754]: I0904 00:10:48.130620 2754 kubelet.go:314] "Adding apiserver pod source" Sep 4 00:10:48.130639 kubelet[2754]: I0904 00:10:48.130631 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:10:48.132803 kubelet[2754]: I0904 00:10:48.132784 2754 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:10:48.133542 kubelet[2754]: I0904 00:10:48.133434 2754 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:10:48.134298 kubelet[2754]: I0904 00:10:48.134284 2754 server.go:1274] "Started kubelet" Sep 4 00:10:48.140685 kubelet[2754]: I0904 00:10:48.139512 2754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:10:48.140685 kubelet[2754]: I0904 00:10:48.140204 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:10:48.140685 kubelet[2754]: I0904 00:10:48.140444 2754 server.go:449] "Adding debug handlers to kubelet server" Sep 4 00:10:48.141671 kubelet[2754]: I0904 00:10:48.141565 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:10:48.145318 kubelet[2754]: I0904 00:10:48.145275 2754 server.go:236] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:10:48.145432 kubelet[2754]: I0904 00:10:48.145403 2754 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 00:10:48.146119 kubelet[2754]: I0904 00:10:48.146090 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:10:48.149742 kubelet[2754]: I0904 00:10:48.149683 2754 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:10:48.149882 kubelet[2754]: I0904 00:10:48.149839 2754 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:10:48.153959 kubelet[2754]: I0904 00:10:48.152939 2754 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:10:48.153959 kubelet[2754]: I0904 00:10:48.153080 2754 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 00:10:48.157898 kubelet[2754]: I0904 00:10:48.157849 2754 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:10:48.161487 kubelet[2754]: E0904 00:10:48.161451 2754 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:10:48.179523 kubelet[2754]: I0904 00:10:48.179462 2754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:10:48.181312 kubelet[2754]: I0904 00:10:48.181194 2754 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 00:10:48.181312 kubelet[2754]: I0904 00:10:48.181241 2754 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 00:10:48.181312 kubelet[2754]: I0904 00:10:48.181267 2754 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 00:10:48.181442 kubelet[2754]: E0904 00:10:48.181325 2754 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216207 2754 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216224 2754 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216243 2754 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216388 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216399 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:10:48.216413 kubelet[2754]: I0904 00:10:48.216417 2754 policy_none.go:49] "None policy: Start" Sep 4 00:10:48.217593 kubelet[2754]: I0904 00:10:48.217558 2754 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 00:10:48.217593 kubelet[2754]: I0904 00:10:48.217598 2754 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:10:48.217919 kubelet[2754]: I0904 00:10:48.217878 2754 state_mem.go:75] "Updated machine memory state" Sep 4 00:10:48.223714 kubelet[2754]: I0904 00:10:48.223686 2754 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:10:48.224121 kubelet[2754]: I0904 00:10:48.223893 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:10:48.224121 kubelet[2754]: I0904 00:10:48.223905 2754 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:10:48.224200 kubelet[2754]: I0904 00:10:48.224136 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:10:48.225137 sudo[2790]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 00:10:48.225596 sudo[2790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 00:10:48.332453 kubelet[2754]: I0904 00:10:48.332415 2754 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:10:48.345597 kubelet[2754]: I0904 00:10:48.345546 2754 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 4 00:10:48.345861 kubelet[2754]: I0904 00:10:48.345671 2754 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 00:10:48.454676 kubelet[2754]: I0904 00:10:48.454430 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 00:10:48.454676 kubelet[2754]: I0904 00:10:48.454495 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:48.454676 kubelet[2754]: I0904 00:10:48.454544 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:48.454676 kubelet[2754]: I0904 00:10:48.454569 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:48.454676 kubelet[2754]: I0904 00:10:48.454594 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:48.455003 kubelet[2754]: I0904 00:10:48.454613 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:48.455003 kubelet[2754]: I0904 00:10:48.454633 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8318028e2b234682d5bd4ebec7ccc9a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8318028e2b234682d5bd4ebec7ccc9a4\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:48.455178 kubelet[2754]: I0904 00:10:48.455145 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:48.455226 kubelet[2754]: I0904 00:10:48.455193 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:10:48.803842 sudo[2790]: pam_unix(sudo:session): session closed for user root Sep 4 00:10:49.131719 kubelet[2754]: I0904 00:10:49.131541 2754 apiserver.go:52] "Watching apiserver" Sep 4 00:10:49.154051 kubelet[2754]: I0904 00:10:49.153979 2754 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 00:10:49.205398 kubelet[2754]: E0904 00:10:49.205333 2754 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 00:10:49.218567 kubelet[2754]: I0904 00:10:49.218429 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.21840387 podStartE2EDuration="1.21840387s" podCreationTimestamp="2025-09-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:10:49.218308029 +0000 UTC m=+1.326368063" watchObservedRunningTime="2025-09-04 00:10:49.21840387 +0000 UTC m=+1.326463904" Sep 4 00:10:49.242685 kubelet[2754]: I0904 00:10:49.241913 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2418887060000001 podStartE2EDuration="1.241888706s" podCreationTimestamp="2025-09-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-04 00:10:49.231252949 +0000 UTC m=+1.339312983" watchObservedRunningTime="2025-09-04 00:10:49.241888706 +0000 UTC m=+1.349948740" Sep 4 00:10:49.255941 kubelet[2754]: I0904 00:10:49.255858 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2558338820000001 podStartE2EDuration="1.255833882s" podCreationTimestamp="2025-09-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:10:49.242098121 +0000 UTC m=+1.350158155" watchObservedRunningTime="2025-09-04 00:10:49.255833882 +0000 UTC m=+1.363893916" Sep 4 00:10:50.359795 sudo[1787]: pam_unix(sudo:session): session closed for user root Sep 4 00:10:50.361793 sshd[1786]: Connection closed by 10.0.0.1 port 41866 Sep 4 00:10:50.362595 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:50.368577 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:41866.service: Deactivated successfully. Sep 4 00:10:50.371910 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:10:50.372259 systemd[1]: session-9.scope: Consumed 5.640s CPU time, 254.5M memory peak. Sep 4 00:10:50.374290 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:10:50.375919 systemd-logind[1539]: Removed session 9. Sep 4 00:10:52.578381 kubelet[2754]: I0904 00:10:52.578339 2754 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:10:52.578998 kubelet[2754]: I0904 00:10:52.578920 2754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:10:52.579051 containerd[1569]: time="2025-09-04T00:10:52.578744854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 00:10:53.285843 systemd[1]: Created slice kubepods-besteffort-pod2d990959_1029_45ca_9110_2c64bb78d4a0.slice - libcontainer container kubepods-besteffort-pod2d990959_1029_45ca_9110_2c64bb78d4a0.slice. Sep 4 00:10:53.288830 kubelet[2754]: I0904 00:10:53.288773 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-bpf-maps\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289166 kubelet[2754]: I0904 00:10:53.288885 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hostproc\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289177 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cni-path\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289214 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-etc-cni-netd\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289237 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-xtables-lock\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") 
" pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289260 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-config-path\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289281 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstsz\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-kube-api-access-fstsz\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289368 kubelet[2754]: I0904 00:10:53.289305 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-kernel\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289596 kubelet[2754]: I0904 00:10:53.289326 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-cgroup\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289596 kubelet[2754]: I0904 00:10:53.289346 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d990959-1029-45ca-9110-2c64bb78d4a0-lib-modules\") pod \"kube-proxy-g4g5z\" (UID: \"2d990959-1029-45ca-9110-2c64bb78d4a0\") " pod="kube-system/kube-proxy-g4g5z" Sep 4 00:10:53.289596 kubelet[2754]: I0904 00:10:53.289373 
2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-clustermesh-secrets\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289596 kubelet[2754]: I0904 00:10:53.289395 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-net\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289596 kubelet[2754]: I0904 00:10:53.289426 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d990959-1029-45ca-9110-2c64bb78d4a0-kube-proxy\") pod \"kube-proxy-g4g5z\" (UID: \"2d990959-1029-45ca-9110-2c64bb78d4a0\") " pod="kube-system/kube-proxy-g4g5z" Sep 4 00:10:53.289906 kubelet[2754]: I0904 00:10:53.289460 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d990959-1029-45ca-9110-2c64bb78d4a0-xtables-lock\") pod \"kube-proxy-g4g5z\" (UID: \"2d990959-1029-45ca-9110-2c64bb78d4a0\") " pod="kube-system/kube-proxy-g4g5z" Sep 4 00:10:53.289906 kubelet[2754]: I0904 00:10:53.289484 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvhd\" (UniqueName: \"kubernetes.io/projected/2d990959-1029-45ca-9110-2c64bb78d4a0-kube-api-access-hqvhd\") pod \"kube-proxy-g4g5z\" (UID: \"2d990959-1029-45ca-9110-2c64bb78d4a0\") " pod="kube-system/kube-proxy-g4g5z" Sep 4 00:10:53.289906 kubelet[2754]: I0904 00:10:53.289508 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-run\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289906 kubelet[2754]: I0904 00:10:53.289531 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-lib-modules\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.289906 kubelet[2754]: I0904 00:10:53.289552 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hubble-tls\") pod \"cilium-tksl9\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " pod="kube-system/cilium-tksl9" Sep 4 00:10:53.304823 systemd[1]: Created slice kubepods-burstable-podc2ccef98_f022_42dc_9bb4_5ff35b8600fc.slice - libcontainer container kubepods-burstable-podc2ccef98_f022_42dc_9bb4_5ff35b8600fc.slice. Sep 4 00:10:53.485709 systemd[1]: Created slice kubepods-besteffort-pod09d87aba_88fe_4de6_bdf3_1647db944452.slice - libcontainer container kubepods-besteffort-pod09d87aba_88fe_4de6_bdf3_1647db944452.slice. 
Sep 4 00:10:53.491592 kubelet[2754]: I0904 00:10:53.491551 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq2m\" (UniqueName: \"kubernetes.io/projected/09d87aba-88fe-4de6-bdf3-1647db944452-kube-api-access-mxq2m\") pod \"cilium-operator-5d85765b45-m5wxf\" (UID: \"09d87aba-88fe-4de6-bdf3-1647db944452\") " pod="kube-system/cilium-operator-5d85765b45-m5wxf" Sep 4 00:10:53.491592 kubelet[2754]: I0904 00:10:53.491600 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d87aba-88fe-4de6-bdf3-1647db944452-cilium-config-path\") pod \"cilium-operator-5d85765b45-m5wxf\" (UID: \"09d87aba-88fe-4de6-bdf3-1647db944452\") " pod="kube-system/cilium-operator-5d85765b45-m5wxf" Sep 4 00:10:53.598712 containerd[1569]: time="2025-09-04T00:10:53.598579585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4g5z,Uid:2d990959-1029-45ca-9110-2c64bb78d4a0,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:53.611485 containerd[1569]: time="2025-09-04T00:10:53.611449507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tksl9,Uid:c2ccef98-f022-42dc-9bb4-5ff35b8600fc,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:53.818468 containerd[1569]: time="2025-09-04T00:10:53.818377727Z" level=info msg="connecting to shim 2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f" address="unix:///run/containerd/s/8efb39d76977efc42882b050fd40717185b7ef6c7e16958722b469f2cb5e033a" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:53.822374 containerd[1569]: time="2025-09-04T00:10:53.822303338Z" level=info msg="connecting to shim 97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:53.841794 systemd[1]: Started 
cri-containerd-2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f.scope - libcontainer container 2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f. Sep 4 00:10:53.867815 systemd[1]: Started cri-containerd-97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a.scope - libcontainer container 97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a. Sep 4 00:10:53.905745 containerd[1569]: time="2025-09-04T00:10:53.905041039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4g5z,Uid:2d990959-1029-45ca-9110-2c64bb78d4a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f\"" Sep 4 00:10:53.908360 containerd[1569]: time="2025-09-04T00:10:53.908320012Z" level=info msg="CreateContainer within sandbox \"2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:10:53.911952 containerd[1569]: time="2025-09-04T00:10:53.911914060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tksl9,Uid:c2ccef98-f022-42dc-9bb4-5ff35b8600fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\"" Sep 4 00:10:53.913597 containerd[1569]: time="2025-09-04T00:10:53.913575457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 00:10:53.924842 containerd[1569]: time="2025-09-04T00:10:53.924789040Z" level=info msg="Container 6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:53.932622 containerd[1569]: time="2025-09-04T00:10:53.932570232Z" level=info msg="CreateContainer within sandbox \"2eb23879e7b5089b28e0f00be3929287223dd63ced01d35c1c10adf4d92b731f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1\"" Sep 4 00:10:53.933217 containerd[1569]: time="2025-09-04T00:10:53.933123513Z" level=info msg="StartContainer for \"6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1\"" Sep 4 00:10:53.934546 containerd[1569]: time="2025-09-04T00:10:53.934518039Z" level=info msg="connecting to shim 6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1" address="unix:///run/containerd/s/8efb39d76977efc42882b050fd40717185b7ef6c7e16958722b469f2cb5e033a" protocol=ttrpc version=3 Sep 4 00:10:53.956791 systemd[1]: Started cri-containerd-6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1.scope - libcontainer container 6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1. Sep 4 00:10:54.034602 containerd[1569]: time="2025-09-04T00:10:54.034554596Z" level=info msg="StartContainer for \"6d691ee62d63dbb7434efa121c783108f3d87563bd5efa0afb88096199f616d1\" returns successfully" Sep 4 00:10:54.089798 containerd[1569]: time="2025-09-04T00:10:54.089739053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m5wxf,Uid:09d87aba-88fe-4de6-bdf3-1647db944452,Namespace:kube-system,Attempt:0,}" Sep 4 00:10:54.131895 containerd[1569]: time="2025-09-04T00:10:54.129563067Z" level=info msg="connecting to shim 0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb" address="unix:///run/containerd/s/64f4ba59e612808d9e5b76bb00a0f367fb9ae30029a5c5d15eafdabde1ca93e3" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:10:54.160817 systemd[1]: Started cri-containerd-0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb.scope - libcontainer container 0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb. 
Sep 4 00:10:54.213693 containerd[1569]: time="2025-09-04T00:10:54.213621157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m5wxf,Uid:09d87aba-88fe-4de6-bdf3-1647db944452,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\"" Sep 4 00:10:54.232842 kubelet[2754]: I0904 00:10:54.232788 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g4g5z" podStartSLOduration=1.232766136 podStartE2EDuration="1.232766136s" podCreationTimestamp="2025-09-04 00:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:10:54.223406637 +0000 UTC m=+6.331466921" watchObservedRunningTime="2025-09-04 00:10:54.232766136 +0000 UTC m=+6.340826170" Sep 4 00:11:02.438005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665889376.mount: Deactivated successfully. 
Sep 4 00:11:05.630974 containerd[1569]: time="2025-09-04T00:11:05.630900886Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:11:05.631927 containerd[1569]: time="2025-09-04T00:11:05.631892359Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 00:11:05.637043 containerd[1569]: time="2025-09-04T00:11:05.636917281Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:11:05.638882 containerd[1569]: time="2025-09-04T00:11:05.638834874Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.725131005s" Sep 4 00:11:05.638967 containerd[1569]: time="2025-09-04T00:11:05.638889637Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 00:11:05.639841 containerd[1569]: time="2025-09-04T00:11:05.639812181Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 00:11:05.641170 containerd[1569]: time="2025-09-04T00:11:05.641122303Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 00:11:05.653688 containerd[1569]: time="2025-09-04T00:11:05.652912256Z" level=info msg="Container a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:05.657689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355006498.mount: Deactivated successfully. Sep 4 00:11:05.676986 containerd[1569]: time="2025-09-04T00:11:05.676922931Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\"" Sep 4 00:11:05.677757 containerd[1569]: time="2025-09-04T00:11:05.677727834Z" level=info msg="StartContainer for \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\"" Sep 4 00:11:05.678836 containerd[1569]: time="2025-09-04T00:11:05.678804917Z" level=info msg="connecting to shim a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" protocol=ttrpc version=3 Sep 4 00:11:05.704909 systemd[1]: Started cri-containerd-a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f.scope - libcontainer container a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f. Sep 4 00:11:05.744664 containerd[1569]: time="2025-09-04T00:11:05.744594567Z" level=info msg="StartContainer for \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" returns successfully" Sep 4 00:11:05.756975 systemd[1]: cri-containerd-a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f.scope: Deactivated successfully. 
Sep 4 00:11:05.758923 containerd[1569]: time="2025-09-04T00:11:05.758876893Z" level=info msg="received exit event container_id:\"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" id:\"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" pid:3175 exited_at:{seconds:1756944665 nanos:758324094}" Sep 4 00:11:05.759055 containerd[1569]: time="2025-09-04T00:11:05.759009021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" id:\"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" pid:3175 exited_at:{seconds:1756944665 nanos:758324094}" Sep 4 00:11:05.784475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f-rootfs.mount: Deactivated successfully. Sep 4 00:11:06.386369 containerd[1569]: time="2025-09-04T00:11:06.386314043Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 00:11:06.397665 containerd[1569]: time="2025-09-04T00:11:06.397578918Z" level=info msg="Container 33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:06.405310 containerd[1569]: time="2025-09-04T00:11:06.405245061Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\"" Sep 4 00:11:06.410696 containerd[1569]: time="2025-09-04T00:11:06.409929332Z" level=info msg="StartContainer for \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\"" Sep 4 00:11:06.411224 containerd[1569]: time="2025-09-04T00:11:06.411197875Z" level=info msg="connecting to shim 
33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" protocol=ttrpc version=3 Sep 4 00:11:06.440957 systemd[1]: Started cri-containerd-33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13.scope - libcontainer container 33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13. Sep 4 00:11:06.476795 containerd[1569]: time="2025-09-04T00:11:06.476749843Z" level=info msg="StartContainer for \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" returns successfully" Sep 4 00:11:06.500140 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:11:06.500384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:11:06.500728 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:11:06.502422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:11:06.504858 systemd[1]: cri-containerd-33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13.scope: Deactivated successfully. Sep 4 00:11:06.506558 containerd[1569]: time="2025-09-04T00:11:06.506500308Z" level=info msg="received exit event container_id:\"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" id:\"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" pid:3221 exited_at:{seconds:1756944666 nanos:506213800}" Sep 4 00:11:06.507889 containerd[1569]: time="2025-09-04T00:11:06.507852268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" id:\"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" pid:3221 exited_at:{seconds:1756944666 nanos:506213800}" Sep 4 00:11:06.540362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 00:11:07.391745 containerd[1569]: time="2025-09-04T00:11:07.391485394Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 00:11:07.409363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883428886.mount: Deactivated successfully. Sep 4 00:11:07.417592 containerd[1569]: time="2025-09-04T00:11:07.417552122Z" level=info msg="Container 57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:07.427866 containerd[1569]: time="2025-09-04T00:11:07.427804873Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\"" Sep 4 00:11:07.428974 containerd[1569]: time="2025-09-04T00:11:07.428940466Z" level=info msg="StartContainer for \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\"" Sep 4 00:11:07.433694 containerd[1569]: time="2025-09-04T00:11:07.432757046Z" level=info msg="connecting to shim 57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" protocol=ttrpc version=3 Sep 4 00:11:07.459856 systemd[1]: Started cri-containerd-57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6.scope - libcontainer container 57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6. Sep 4 00:11:07.512527 systemd[1]: cri-containerd-57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6.scope: Deactivated successfully. 
Sep 4 00:11:07.513298 containerd[1569]: time="2025-09-04T00:11:07.512525729Z" level=info msg="StartContainer for \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" returns successfully" Sep 4 00:11:07.516639 containerd[1569]: time="2025-09-04T00:11:07.516605954Z" level=info msg="received exit event container_id:\"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" id:\"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" pid:3276 exited_at:{seconds:1756944667 nanos:516366134}" Sep 4 00:11:07.516747 containerd[1569]: time="2025-09-04T00:11:07.516605664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" id:\"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" pid:3276 exited_at:{seconds:1756944667 nanos:516366134}" Sep 4 00:11:07.751103 containerd[1569]: time="2025-09-04T00:11:07.751038018Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:11:07.751863 containerd[1569]: time="2025-09-04T00:11:07.751836949Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 00:11:07.752971 containerd[1569]: time="2025-09-04T00:11:07.752941474Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:11:07.754013 containerd[1569]: time="2025-09-04T00:11:07.753985475Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.114142255s" Sep 4 00:11:07.754013 containerd[1569]: time="2025-09-04T00:11:07.754011454Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 00:11:07.755717 containerd[1569]: time="2025-09-04T00:11:07.755688094Z" level=info msg="CreateContainer within sandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 00:11:07.765089 containerd[1569]: time="2025-09-04T00:11:07.764159107Z" level=info msg="Container 03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:07.774606 containerd[1569]: time="2025-09-04T00:11:07.774560607Z" level=info msg="CreateContainer within sandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\"" Sep 4 00:11:07.775163 containerd[1569]: time="2025-09-04T00:11:07.775111613Z" level=info msg="StartContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\"" Sep 4 00:11:07.776376 containerd[1569]: time="2025-09-04T00:11:07.776339159Z" level=info msg="connecting to shim 03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e" address="unix:///run/containerd/s/64f4ba59e612808d9e5b76bb00a0f367fb9ae30029a5c5d15eafdabde1ca93e3" protocol=ttrpc version=3 Sep 4 00:11:07.802789 systemd[1]: Started cri-containerd-03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e.scope - libcontainer container 03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e. 
Sep 4 00:11:07.834579 containerd[1569]: time="2025-09-04T00:11:07.834521590Z" level=info msg="StartContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" returns successfully" Sep 4 00:11:08.407677 containerd[1569]: time="2025-09-04T00:11:08.407382841Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 00:11:08.430496 containerd[1569]: time="2025-09-04T00:11:08.430435985Z" level=info msg="Container 100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:08.447690 containerd[1569]: time="2025-09-04T00:11:08.447206186Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\"" Sep 4 00:11:08.450321 containerd[1569]: time="2025-09-04T00:11:08.450283216Z" level=info msg="StartContainer for \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\"" Sep 4 00:11:08.453996 containerd[1569]: time="2025-09-04T00:11:08.453867659Z" level=info msg="connecting to shim 100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" protocol=ttrpc version=3 Sep 4 00:11:08.499696 kubelet[2754]: I0904 00:11:08.499092 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m5wxf" podStartSLOduration=1.9601185920000002 podStartE2EDuration="15.499072573s" podCreationTimestamp="2025-09-04 00:10:53 +0000 UTC" firstStartedPulling="2025-09-04 00:10:54.215621793 +0000 UTC m=+6.323681827" lastFinishedPulling="2025-09-04 00:11:07.754575774 +0000 UTC m=+19.862635808" 
observedRunningTime="2025-09-04 00:11:08.498997632 +0000 UTC m=+20.607057666" watchObservedRunningTime="2025-09-04 00:11:08.499072573 +0000 UTC m=+20.607132607" Sep 4 00:11:08.505008 systemd[1]: Started cri-containerd-100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4.scope - libcontainer container 100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4. Sep 4 00:11:08.563232 systemd[1]: cri-containerd-100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4.scope: Deactivated successfully. Sep 4 00:11:08.564458 containerd[1569]: time="2025-09-04T00:11:08.564419071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" id:\"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" pid:3358 exited_at:{seconds:1756944668 nanos:564173290}" Sep 4 00:11:08.567493 containerd[1569]: time="2025-09-04T00:11:08.567454222Z" level=info msg="received exit event container_id:\"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" id:\"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" pid:3358 exited_at:{seconds:1756944668 nanos:564173290}" Sep 4 00:11:08.577711 containerd[1569]: time="2025-09-04T00:11:08.577641869Z" level=info msg="StartContainer for \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" returns successfully" Sep 4 00:11:09.417879 containerd[1569]: time="2025-09-04T00:11:09.417814562Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 00:11:09.432387 containerd[1569]: time="2025-09-04T00:11:09.432322021Z" level=info msg="Container 73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:09.438586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251522827.mount: Deactivated successfully. 
Sep 4 00:11:09.444623 containerd[1569]: time="2025-09-04T00:11:09.444582690Z" level=info msg="CreateContainer within sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\"" Sep 4 00:11:09.445348 containerd[1569]: time="2025-09-04T00:11:09.445306850Z" level=info msg="StartContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\"" Sep 4 00:11:09.446795 containerd[1569]: time="2025-09-04T00:11:09.446751894Z" level=info msg="connecting to shim 73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee" address="unix:///run/containerd/s/5792b38e4306f31ab6bd9e4d28f1ec74f72a7b0cecd89ebe9cab798beca1f946" protocol=ttrpc version=3 Sep 4 00:11:09.480018 systemd[1]: Started cri-containerd-73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee.scope - libcontainer container 73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee. Sep 4 00:11:09.533286 containerd[1569]: time="2025-09-04T00:11:09.533229870Z" level=info msg="StartContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" returns successfully" Sep 4 00:11:09.623557 containerd[1569]: time="2025-09-04T00:11:09.623508876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" id:\"8ee77806b3277b7ab8ce28cd8a47caf1e2f43d573384e70e41501f5bfedab446\" pid:3427 exited_at:{seconds:1756944669 nanos:623144823}" Sep 4 00:11:09.701019 kubelet[2754]: I0904 00:11:09.700967 2754 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 4 00:11:09.734066 systemd[1]: Created slice kubepods-burstable-pod07c486d5_bac6_45d9_9669_76cd8b974684.slice - libcontainer container kubepods-burstable-pod07c486d5_bac6_45d9_9669_76cd8b974684.slice. 
Sep 4 00:11:09.745171 systemd[1]: Created slice kubepods-burstable-pod0965d66d_1d57_435d_a79c_b6ddb3ce42e4.slice - libcontainer container kubepods-burstable-pod0965d66d_1d57_435d_a79c_b6ddb3ce42e4.slice. Sep 4 00:11:09.798933 kubelet[2754]: I0904 00:11:09.798878 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbjj\" (UniqueName: \"kubernetes.io/projected/0965d66d-1d57-435d-a79c-b6ddb3ce42e4-kube-api-access-xtbjj\") pod \"coredns-7c65d6cfc9-8bcbc\" (UID: \"0965d66d-1d57-435d-a79c-b6ddb3ce42e4\") " pod="kube-system/coredns-7c65d6cfc9-8bcbc" Sep 4 00:11:09.798933 kubelet[2754]: I0904 00:11:09.798921 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07c486d5-bac6-45d9-9669-76cd8b974684-config-volume\") pod \"coredns-7c65d6cfc9-mx5pl\" (UID: \"07c486d5-bac6-45d9-9669-76cd8b974684\") " pod="kube-system/coredns-7c65d6cfc9-mx5pl" Sep 4 00:11:09.798933 kubelet[2754]: I0904 00:11:09.798939 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0965d66d-1d57-435d-a79c-b6ddb3ce42e4-config-volume\") pod \"coredns-7c65d6cfc9-8bcbc\" (UID: \"0965d66d-1d57-435d-a79c-b6ddb3ce42e4\") " pod="kube-system/coredns-7c65d6cfc9-8bcbc" Sep 4 00:11:09.799192 kubelet[2754]: I0904 00:11:09.798955 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244wz\" (UniqueName: \"kubernetes.io/projected/07c486d5-bac6-45d9-9669-76cd8b974684-kube-api-access-244wz\") pod \"coredns-7c65d6cfc9-mx5pl\" (UID: \"07c486d5-bac6-45d9-9669-76cd8b974684\") " pod="kube-system/coredns-7c65d6cfc9-mx5pl" Sep 4 00:11:10.043502 containerd[1569]: time="2025-09-04T00:11:10.043350136Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5pl,Uid:07c486d5-bac6-45d9-9669-76cd8b974684,Namespace:kube-system,Attempt:0,}" Sep 4 00:11:10.049219 containerd[1569]: time="2025-09-04T00:11:10.049161811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bcbc,Uid:0965d66d-1d57-435d-a79c-b6ddb3ce42e4,Namespace:kube-system,Attempt:0,}" Sep 4 00:11:11.881159 systemd-networkd[1467]: cilium_host: Link UP Sep 4 00:11:11.881348 systemd-networkd[1467]: cilium_net: Link UP Sep 4 00:11:11.881566 systemd-networkd[1467]: cilium_net: Gained carrier Sep 4 00:11:11.881791 systemd-networkd[1467]: cilium_host: Gained carrier Sep 4 00:11:11.991729 systemd-networkd[1467]: cilium_vxlan: Link UP Sep 4 00:11:11.991742 systemd-networkd[1467]: cilium_vxlan: Gained carrier Sep 4 00:11:12.223679 kernel: NET: Registered PF_ALG protocol family Sep 4 00:11:12.407869 systemd-networkd[1467]: cilium_net: Gained IPv6LL Sep 4 00:11:12.408320 systemd-networkd[1467]: cilium_host: Gained IPv6LL Sep 4 00:11:13.055400 systemd-networkd[1467]: lxc_health: Link UP Sep 4 00:11:13.056983 systemd-networkd[1467]: lxc_health: Gained carrier Sep 4 00:11:13.624157 systemd-networkd[1467]: cilium_vxlan: Gained IPv6LL Sep 4 00:11:13.649177 kubelet[2754]: I0904 00:11:13.649112 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tksl9" podStartSLOduration=8.92262699 podStartE2EDuration="20.649091723s" podCreationTimestamp="2025-09-04 00:10:53 +0000 UTC" firstStartedPulling="2025-09-04 00:10:53.91321562 +0000 UTC m=+6.021275654" lastFinishedPulling="2025-09-04 00:11:05.639680353 +0000 UTC m=+17.747740387" observedRunningTime="2025-09-04 00:11:10.437976559 +0000 UTC m=+22.546036603" watchObservedRunningTime="2025-09-04 00:11:13.649091723 +0000 UTC m=+25.757151757" Sep 4 00:11:13.668879 kernel: eth0: renamed from tmp8f9b2 Sep 4 00:11:13.669018 systemd-networkd[1467]: lxca34b69de217d: Link UP Sep 4 00:11:13.691574 kernel: eth0: renamed from tmpcc909 
Sep 4 00:11:13.694325 systemd-networkd[1467]: lxca34b69de217d: Gained carrier Sep 4 00:11:13.696830 systemd-networkd[1467]: lxcea95591a699a: Link UP Sep 4 00:11:13.710043 systemd-networkd[1467]: lxcea95591a699a: Gained carrier Sep 4 00:11:14.456003 systemd-networkd[1467]: lxc_health: Gained IPv6LL Sep 4 00:11:15.351953 systemd-networkd[1467]: lxcea95591a699a: Gained IPv6LL Sep 4 00:11:15.543888 systemd-networkd[1467]: lxca34b69de217d: Gained IPv6LL Sep 4 00:11:18.231920 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:37188.service - OpenSSH per-connection server daemon (10.0.0.1:37188). Sep 4 00:11:18.405681 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 37188 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:18.410776 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:18.421744 systemd-logind[1539]: New session 10 of user core. Sep 4 00:11:18.434953 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 00:11:18.613263 containerd[1569]: time="2025-09-04T00:11:18.612961555Z" level=info msg="connecting to shim cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4" address="unix:///run/containerd/s/e4a307bd5615f8ac99e17cd10cb4b7ea579853901a4853578bb05dcb594148c6" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:11:18.638039 containerd[1569]: time="2025-09-04T00:11:18.637635832Z" level=info msg="connecting to shim 8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0" address="unix:///run/containerd/s/1d0a73239bfc209403bed6d2bee2f15444fd8f4520e7b90107cebe2af3dec066" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:11:18.724425 systemd[1]: Started cri-containerd-8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0.scope - libcontainer container 8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0. 
Sep 4 00:11:18.757738 systemd[1]: Started cri-containerd-cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4.scope - libcontainer container cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4. Sep 4 00:11:18.779571 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:11:18.791457 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:11:18.793330 sshd[3916]: Connection closed by 10.0.0.1 port 37188 Sep 4 00:11:18.795695 sshd-session[3908]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:18.806179 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:37188.service: Deactivated successfully. Sep 4 00:11:18.814904 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 00:11:18.818358 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Sep 4 00:11:18.831773 systemd-logind[1539]: Removed session 10. 
Sep 4 00:11:18.922233 containerd[1569]: time="2025-09-04T00:11:18.922157466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5pl,Uid:07c486d5-bac6-45d9-9669-76cd8b974684,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4\"" Sep 4 00:11:18.924349 containerd[1569]: time="2025-09-04T00:11:18.924255383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bcbc,Uid:0965d66d-1d57-435d-a79c-b6ddb3ce42e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0\"" Sep 4 00:11:18.929947 containerd[1569]: time="2025-09-04T00:11:18.929900501Z" level=info msg="CreateContainer within sandbox \"cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:11:18.935079 containerd[1569]: time="2025-09-04T00:11:18.934469166Z" level=info msg="CreateContainer within sandbox \"8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:11:18.990685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601022777.mount: Deactivated successfully. 
Sep 4 00:11:18.993392 containerd[1569]: time="2025-09-04T00:11:18.993292485Z" level=info msg="Container 836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:18.995558 containerd[1569]: time="2025-09-04T00:11:18.995318689Z" level=info msg="Container e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:11:19.014721 containerd[1569]: time="2025-09-04T00:11:19.014536802Z" level=info msg="CreateContainer within sandbox \"cc9097471db555d30e4ba3b12d9ab819bd811308af59e12db16306239fee72b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c\"" Sep 4 00:11:19.024403 containerd[1569]: time="2025-09-04T00:11:19.024307313Z" level=info msg="StartContainer for \"836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c\"" Sep 4 00:11:19.028187 containerd[1569]: time="2025-09-04T00:11:19.028120380Z" level=info msg="connecting to shim 836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c" address="unix:///run/containerd/s/e4a307bd5615f8ac99e17cd10cb4b7ea579853901a4853578bb05dcb594148c6" protocol=ttrpc version=3 Sep 4 00:11:19.036618 containerd[1569]: time="2025-09-04T00:11:19.036527591Z" level=info msg="CreateContainer within sandbox \"8f9b2efb6250a40434371e602017edc3a403f502f0d40e85327c685f4ecd20d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155\"" Sep 4 00:11:19.040685 containerd[1569]: time="2025-09-04T00:11:19.039379664Z" level=info msg="StartContainer for \"e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155\"" Sep 4 00:11:19.041334 containerd[1569]: time="2025-09-04T00:11:19.041309376Z" level=info msg="connecting to shim e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155" 
address="unix:///run/containerd/s/1d0a73239bfc209403bed6d2bee2f15444fd8f4520e7b90107cebe2af3dec066" protocol=ttrpc version=3 Sep 4 00:11:19.097260 systemd[1]: Started cri-containerd-836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c.scope - libcontainer container 836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c. Sep 4 00:11:19.121029 systemd[1]: Started cri-containerd-e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155.scope - libcontainer container e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155. Sep 4 00:11:19.229273 containerd[1569]: time="2025-09-04T00:11:19.227910946Z" level=info msg="StartContainer for \"836d03e21d5001a039844a7ba9cdbff41c972601371b0ca5b8470790d749ed5c\" returns successfully" Sep 4 00:11:19.246096 containerd[1569]: time="2025-09-04T00:11:19.243759985Z" level=info msg="StartContainer for \"e45d8904d224ea19d479910b78656ccb12e8cee3de3cbb772ba649a5befd4155\" returns successfully" Sep 4 00:11:19.808855 kubelet[2754]: I0904 00:11:19.807112 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8bcbc" podStartSLOduration=26.807052579 podStartE2EDuration="26.807052579s" podCreationTimestamp="2025-09-04 00:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:11:19.665629382 +0000 UTC m=+31.773689426" watchObservedRunningTime="2025-09-04 00:11:19.807052579 +0000 UTC m=+31.915112613" Sep 4 00:11:19.814479 kubelet[2754]: I0904 00:11:19.813354 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mx5pl" podStartSLOduration=26.813324592 podStartE2EDuration="26.813324592s" podCreationTimestamp="2025-09-04 00:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:11:19.807577875 +0000 UTC 
m=+31.915637919" watchObservedRunningTime="2025-09-04 00:11:19.813324592 +0000 UTC m=+31.921384626" Sep 4 00:11:23.845228 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:52508.service - OpenSSH per-connection server daemon (10.0.0.1:52508). Sep 4 00:11:23.972069 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 52508 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:23.976305 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:23.996103 systemd-logind[1539]: New session 11 of user core. Sep 4 00:11:24.037024 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:11:24.309745 sshd[4102]: Connection closed by 10.0.0.1 port 52508 Sep 4 00:11:24.313003 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:24.324717 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:52508.service: Deactivated successfully. Sep 4 00:11:24.329567 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:11:24.330877 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:11:24.333088 systemd-logind[1539]: Removed session 11. Sep 4 00:11:29.348108 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:52572.service - OpenSSH per-connection server daemon (10.0.0.1:52572). Sep 4 00:11:29.447371 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 52572 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:29.452052 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:29.484531 systemd-logind[1539]: New session 12 of user core. Sep 4 00:11:29.497014 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 00:11:29.577624 kernel: hrtimer: interrupt took 6261899 ns Sep 4 00:11:29.825677 sshd[4122]: Connection closed by 10.0.0.1 port 52572 Sep 4 00:11:29.826569 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:29.839630 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:52572.service: Deactivated successfully. Sep 4 00:11:29.848507 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 00:11:29.850034 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:11:29.853492 systemd-logind[1539]: Removed session 12. Sep 4 00:11:34.848537 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:60622.service - OpenSSH per-connection server daemon (10.0.0.1:60622). Sep 4 00:11:34.912977 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 60622 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:34.915394 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:34.921534 systemd-logind[1539]: New session 13 of user core. Sep 4 00:11:34.931100 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:11:35.072571 sshd[4139]: Connection closed by 10.0.0.1 port 60622 Sep 4 00:11:35.073330 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:35.078763 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:60622.service: Deactivated successfully. Sep 4 00:11:35.081586 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:11:35.083674 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Sep 4 00:11:35.085516 systemd-logind[1539]: Removed session 13. Sep 4 00:11:40.107601 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802). 
Sep 4 00:11:40.205620 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:40.208403 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:40.233886 systemd-logind[1539]: New session 14 of user core. Sep 4 00:11:40.250170 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 00:11:40.473341 sshd[4157]: Connection closed by 10.0.0.1 port 35802 Sep 4 00:11:40.473910 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:40.483786 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:35802.service: Deactivated successfully. Sep 4 00:11:40.491215 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 00:11:40.493465 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Sep 4 00:11:40.502072 systemd-logind[1539]: Removed session 14. Sep 4 00:11:45.527429 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:35818.service - OpenSSH per-connection server daemon (10.0.0.1:35818). Sep 4 00:11:45.624066 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 35818 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:45.631889 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:45.651933 systemd-logind[1539]: New session 15 of user core. Sep 4 00:11:45.665247 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 00:11:45.894466 sshd[4173]: Connection closed by 10.0.0.1 port 35818 Sep 4 00:11:45.895059 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:45.908434 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:35818.service: Deactivated successfully. Sep 4 00:11:45.912642 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 00:11:45.917258 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. 
Sep 4 00:11:45.923274 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:35830.service - OpenSSH per-connection server daemon (10.0.0.1:35830). Sep 4 00:11:45.925219 systemd-logind[1539]: Removed session 15. Sep 4 00:11:46.013078 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 35830 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:46.012141 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:46.026722 systemd-logind[1539]: New session 16 of user core. Sep 4 00:11:46.039308 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 00:11:46.321635 sshd[4189]: Connection closed by 10.0.0.1 port 35830 Sep 4 00:11:46.322802 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:46.339585 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:35830.service: Deactivated successfully. Sep 4 00:11:46.344373 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 00:11:46.348776 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Sep 4 00:11:46.360305 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:35834.service - OpenSSH per-connection server daemon (10.0.0.1:35834). Sep 4 00:11:46.365004 systemd-logind[1539]: Removed session 16. Sep 4 00:11:46.468968 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 35834 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:46.473737 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:46.490365 systemd-logind[1539]: New session 17 of user core. Sep 4 00:11:46.499996 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 00:11:46.879676 sshd[4202]: Connection closed by 10.0.0.1 port 35834 Sep 4 00:11:46.885076 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:46.896833 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:35834.service: Deactivated successfully. 
Sep 4 00:11:46.902391 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 00:11:46.905322 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Sep 4 00:11:46.917166 systemd-logind[1539]: Removed session 17. Sep 4 00:11:51.915960 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:47816.service - OpenSSH per-connection server daemon (10.0.0.1:47816). Sep 4 00:11:52.037207 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 47816 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:52.040214 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:52.059160 systemd-logind[1539]: New session 18 of user core. Sep 4 00:11:52.067415 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 00:11:52.345737 sshd[4220]: Connection closed by 10.0.0.1 port 47816 Sep 4 00:11:52.346226 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:52.356088 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:47816.service: Deactivated successfully. Sep 4 00:11:52.364715 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 00:11:52.371782 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Sep 4 00:11:52.375362 systemd-logind[1539]: Removed session 18. Sep 4 00:11:56.184873 kubelet[2754]: E0904 00:11:56.184800 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:11:57.390511 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:47818.service - OpenSSH per-connection server daemon (10.0.0.1:47818). 
Sep 4 00:11:57.551538 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 47818 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:11:57.555053 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:11:57.570600 systemd-logind[1539]: New session 19 of user core. Sep 4 00:11:57.588635 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 00:11:57.917639 sshd[4238]: Connection closed by 10.0.0.1 port 47818 Sep 4 00:11:57.919413 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Sep 4 00:11:57.943878 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:47818.service: Deactivated successfully. Sep 4 00:11:57.947100 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 00:11:57.961828 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Sep 4 00:11:57.965298 systemd-logind[1539]: Removed session 19. Sep 4 00:11:58.187687 kubelet[2754]: E0904 00:11:58.187607 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:02.948859 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:41686.service - OpenSSH per-connection server daemon (10.0.0.1:41686). Sep 4 00:12:03.100061 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:03.109987 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:03.129701 systemd-logind[1539]: New session 20 of user core. Sep 4 00:12:03.139384 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 00:12:03.385171 sshd[4253]: Connection closed by 10.0.0.1 port 41686 Sep 4 00:12:03.385362 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:03.391034 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:41686.service: Deactivated successfully. Sep 4 00:12:03.394006 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 00:12:03.396815 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Sep 4 00:12:03.401894 systemd-logind[1539]: Removed session 20. Sep 4 00:12:08.420824 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:41692.service - OpenSSH per-connection server daemon (10.0.0.1:41692). Sep 4 00:12:08.516515 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 41692 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:08.522541 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:08.535371 systemd-logind[1539]: New session 21 of user core. Sep 4 00:12:08.544089 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 00:12:08.778293 sshd[4268]: Connection closed by 10.0.0.1 port 41692 Sep 4 00:12:08.778575 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:08.825702 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:41692.service: Deactivated successfully. Sep 4 00:12:08.834127 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 00:12:08.839356 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Sep 4 00:12:08.858535 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:41698.service - OpenSSH per-connection server daemon (10.0.0.1:41698). Sep 4 00:12:08.860136 systemd-logind[1539]: Removed session 21. 
Sep 4 00:12:08.945173 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:08.946943 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:08.957478 systemd-logind[1539]: New session 22 of user core. Sep 4 00:12:08.982988 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 00:12:09.182245 kubelet[2754]: E0904 00:12:09.182093 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:09.764413 sshd[4284]: Connection closed by 10.0.0.1 port 41698 Sep 4 00:12:09.764762 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:09.781842 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:41698.service: Deactivated successfully. Sep 4 00:12:09.791429 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 00:12:09.794791 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Sep 4 00:12:09.798200 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:41706.service - OpenSSH per-connection server daemon (10.0.0.1:41706). Sep 4 00:12:09.815912 systemd-logind[1539]: Removed session 22. Sep 4 00:12:09.906207 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 41706 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:09.916285 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:09.933915 systemd-logind[1539]: New session 23 of user core. Sep 4 00:12:09.943980 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 00:12:12.397494 sshd[4297]: Connection closed by 10.0.0.1 port 41706 Sep 4 00:12:12.398941 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:12.417959 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:41706.service: Deactivated successfully. Sep 4 00:12:12.424150 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 00:12:12.424589 systemd[1]: session-23.scope: Consumed 792ms CPU time, 69.6M memory peak. Sep 4 00:12:12.428329 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Sep 4 00:12:12.432235 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:59380.service - OpenSSH per-connection server daemon (10.0.0.1:59380). Sep 4 00:12:12.436509 systemd-logind[1539]: Removed session 23. Sep 4 00:12:12.527544 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 59380 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:12.534406 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:12.555434 systemd-logind[1539]: New session 24 of user core. Sep 4 00:12:12.574003 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 00:12:13.535887 sshd[4318]: Connection closed by 10.0.0.1 port 59380 Sep 4 00:12:13.541380 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:13.580329 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:59380.service: Deactivated successfully. Sep 4 00:12:13.588362 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 00:12:13.596723 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Sep 4 00:12:13.605417 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:59392.service - OpenSSH per-connection server daemon (10.0.0.1:59392). Sep 4 00:12:13.608914 systemd-logind[1539]: Removed session 24. 
Sep 4 00:12:13.726913 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 59392 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:13.735178 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:13.754123 systemd-logind[1539]: New session 25 of user core. Sep 4 00:12:13.772936 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 00:12:13.997373 sshd[4331]: Connection closed by 10.0.0.1 port 59392 Sep 4 00:12:13.997944 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:14.014472 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:59392.service: Deactivated successfully. Sep 4 00:12:14.024345 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 00:12:14.026904 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Sep 4 00:12:14.034799 systemd-logind[1539]: Removed session 25. Sep 4 00:12:18.185266 kubelet[2754]: E0904 00:12:18.182773 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:19.031610 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396). Sep 4 00:12:19.148470 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:19.155490 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:19.171950 systemd-logind[1539]: New session 26 of user core. Sep 4 00:12:19.183138 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 00:12:19.408046 sshd[4346]: Connection closed by 10.0.0.1 port 59396 Sep 4 00:12:19.409626 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:19.419241 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:59396.service: Deactivated successfully. Sep 4 00:12:19.429104 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 00:12:19.433543 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Sep 4 00:12:19.435922 systemd-logind[1539]: Removed session 26. Sep 4 00:12:24.440899 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:48766.service - OpenSSH per-connection server daemon (10.0.0.1:48766). Sep 4 00:12:24.553123 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:24.568930 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:24.593088 systemd-logind[1539]: New session 27 of user core. Sep 4 00:12:24.607062 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 00:12:24.914317 sshd[4364]: Connection closed by 10.0.0.1 port 48766 Sep 4 00:12:24.915140 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:24.922556 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:48766.service: Deactivated successfully. Sep 4 00:12:24.937016 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 00:12:24.942179 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Sep 4 00:12:24.954871 systemd-logind[1539]: Removed session 27. Sep 4 00:12:29.952970 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:43144.service - OpenSSH per-connection server daemon (10.0.0.1:43144). 
Sep 4 00:12:30.086018 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 43144 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:30.090138 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:30.116562 systemd-logind[1539]: New session 28 of user core. Sep 4 00:12:30.131267 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 00:12:30.403414 sshd[4384]: Connection closed by 10.0.0.1 port 43144 Sep 4 00:12:30.401342 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:30.424085 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:43144.service: Deactivated successfully. Sep 4 00:12:30.429230 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 00:12:30.442478 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. Sep 4 00:12:30.444413 systemd-logind[1539]: Removed session 28. Sep 4 00:12:31.184106 kubelet[2754]: E0904 00:12:31.182877 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:32.183241 kubelet[2754]: E0904 00:12:32.183145 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:35.445266 systemd[1]: Started sshd@28-10.0.0.134:22-10.0.0.1:43160.service - OpenSSH per-connection server daemon (10.0.0.1:43160). Sep 4 00:12:35.559203 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 43160 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:35.558899 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:35.589898 systemd-logind[1539]: New session 29 of user core. Sep 4 00:12:35.607003 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 4 00:12:35.826047 sshd[4399]: Connection closed by 10.0.0.1 port 43160 Sep 4 00:12:35.827065 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:35.850388 systemd[1]: sshd@28-10.0.0.134:22-10.0.0.1:43160.service: Deactivated successfully. Sep 4 00:12:35.856101 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 00:12:35.859670 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit. Sep 4 00:12:35.864981 systemd-logind[1539]: Removed session 29. Sep 4 00:12:36.188485 kubelet[2754]: E0904 00:12:36.186501 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:38.183995 kubelet[2754]: E0904 00:12:38.183354 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:40.859157 systemd[1]: Started sshd@29-10.0.0.134:22-10.0.0.1:52744.service - OpenSSH per-connection server daemon (10.0.0.1:52744). Sep 4 00:12:40.983512 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 52744 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:40.986345 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:41.013996 systemd-logind[1539]: New session 30 of user core. Sep 4 00:12:41.023500 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 00:12:41.211480 sshd[4415]: Connection closed by 10.0.0.1 port 52744 Sep 4 00:12:41.212044 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:41.227139 systemd[1]: sshd@29-10.0.0.134:22-10.0.0.1:52744.service: Deactivated successfully. Sep 4 00:12:41.232775 systemd[1]: session-30.scope: Deactivated successfully. Sep 4 00:12:41.241076 systemd-logind[1539]: Session 30 logged out. 
Waiting for processes to exit. Sep 4 00:12:41.244766 systemd[1]: Started sshd@30-10.0.0.134:22-10.0.0.1:52748.service - OpenSSH per-connection server daemon (10.0.0.1:52748). Sep 4 00:12:41.248627 systemd-logind[1539]: Removed session 30. Sep 4 00:12:41.355785 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 52748 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:41.361761 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:41.391010 systemd-logind[1539]: New session 31 of user core. Sep 4 00:12:41.409396 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 4 00:12:44.067805 containerd[1569]: time="2025-09-04T00:12:44.067735672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" id:\"2943b04b832ded74430f508d6366deb69ee016792bd410d20c546a65d693c6b3\" pid:4451 exited_at:{seconds:1756944764 nanos:67179301}" Sep 4 00:12:44.078350 containerd[1569]: time="2025-09-04T00:12:44.078175297Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:12:44.084766 containerd[1569]: time="2025-09-04T00:12:44.081425523Z" level=info msg="StopContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" with timeout 2 (s)" Sep 4 00:12:44.089623 containerd[1569]: time="2025-09-04T00:12:44.089544958Z" level=info msg="StopContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" with timeout 30 (s)" Sep 4 00:12:44.101467 containerd[1569]: time="2025-09-04T00:12:44.101409073Z" level=info msg="Stop container \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" with signal terminated" Sep 4 00:12:44.102034 containerd[1569]: 
time="2025-09-04T00:12:44.101964732Z" level=info msg="Stop container \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" with signal terminated" Sep 4 00:12:44.134936 systemd-networkd[1467]: lxc_health: Link DOWN Sep 4 00:12:44.134949 systemd-networkd[1467]: lxc_health: Lost carrier Sep 4 00:12:44.156125 containerd[1569]: time="2025-09-04T00:12:44.153223757Z" level=info msg="received exit event container_id:\"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" id:\"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" pid:3323 exited_at:{seconds:1756944764 nanos:152841896}" Sep 4 00:12:44.156125 containerd[1569]: time="2025-09-04T00:12:44.153508244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" id:\"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" pid:3323 exited_at:{seconds:1756944764 nanos:152841896}" Sep 4 00:12:44.153961 systemd[1]: cri-containerd-03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e.scope: Deactivated successfully. Sep 4 00:12:44.204039 systemd[1]: cri-containerd-73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee.scope: Deactivated successfully. Sep 4 00:12:44.204712 systemd[1]: cri-containerd-73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee.scope: Consumed 9.389s CPU time, 126.3M memory peak, 248K read from disk, 13.3M written to disk. 
Sep 4 00:12:44.209980 containerd[1569]: time="2025-09-04T00:12:44.207259523Z" level=info msg="received exit event container_id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" pid:3396 exited_at:{seconds:1756944764 nanos:206954057}" Sep 4 00:12:44.209980 containerd[1569]: time="2025-09-04T00:12:44.207576421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" id:\"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" pid:3396 exited_at:{seconds:1756944764 nanos:206954057}" Sep 4 00:12:44.240101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e-rootfs.mount: Deactivated successfully. Sep 4 00:12:44.262567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee-rootfs.mount: Deactivated successfully. 
Sep 4 00:12:44.284639 containerd[1569]: time="2025-09-04T00:12:44.284422153Z" level=info msg="StopContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" returns successfully" Sep 4 00:12:44.295390 containerd[1569]: time="2025-09-04T00:12:44.291495534Z" level=info msg="StopPodSandbox for \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\"" Sep 4 00:12:44.295608 containerd[1569]: time="2025-09-04T00:12:44.295562862Z" level=info msg="Container to stop \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.310779 containerd[1569]: time="2025-09-04T00:12:44.310701156Z" level=info msg="StopContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" returns successfully" Sep 4 00:12:44.327456 systemd[1]: cri-containerd-0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb.scope: Deactivated successfully. Sep 4 00:12:44.331010 containerd[1569]: time="2025-09-04T00:12:44.330265636Z" level=info msg="StopPodSandbox for \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\"" Sep 4 00:12:44.331010 containerd[1569]: time="2025-09-04T00:12:44.330386855Z" level=info msg="Container to stop \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.331010 containerd[1569]: time="2025-09-04T00:12:44.330407934Z" level=info msg="Container to stop \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.331010 containerd[1569]: time="2025-09-04T00:12:44.330420809Z" level=info msg="Container to stop \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.331010 containerd[1569]: 
time="2025-09-04T00:12:44.330434144Z" level=info msg="Container to stop \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.331010 containerd[1569]: time="2025-09-04T00:12:44.330446878Z" level=info msg="Container to stop \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:12:44.332478 containerd[1569]: time="2025-09-04T00:12:44.332357405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" pid:3010 exit_status:137 exited_at:{seconds:1756944764 nanos:331919337}" Sep 4 00:12:44.353983 systemd[1]: cri-containerd-97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a.scope: Deactivated successfully. Sep 4 00:12:44.470386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a-rootfs.mount: Deactivated successfully. Sep 4 00:12:44.478031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb-rootfs.mount: Deactivated successfully. 
Sep 4 00:12:44.493770 containerd[1569]: time="2025-09-04T00:12:44.493676125Z" level=info msg="shim disconnected" id=97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a namespace=k8s.io Sep 4 00:12:44.493770 containerd[1569]: time="2025-09-04T00:12:44.493728334Z" level=warning msg="cleaning up after shim disconnected" id=97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a namespace=k8s.io Sep 4 00:12:44.555913 containerd[1569]: time="2025-09-04T00:12:44.493739645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:12:44.556143 containerd[1569]: time="2025-09-04T00:12:44.495059928Z" level=info msg="shim disconnected" id=0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb namespace=k8s.io Sep 4 00:12:44.556143 containerd[1569]: time="2025-09-04T00:12:44.556076110Z" level=warning msg="cleaning up after shim disconnected" id=0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb namespace=k8s.io Sep 4 00:12:44.556143 containerd[1569]: time="2025-09-04T00:12:44.556093252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:12:44.627277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a-shm.mount: Deactivated successfully. 
Sep 4 00:12:44.629677 containerd[1569]: time="2025-09-04T00:12:44.625611062Z" level=error msg="Failed to handle event container_id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" pid:3010 exit_status:137 exited_at:{seconds:1756944764 nanos:331919337} for 0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 4 00:12:44.629978 containerd[1569]: time="2025-09-04T00:12:44.629936106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" id:\"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" pid:2908 exit_status:137 exited_at:{seconds:1756944764 nanos:373082747}" Sep 4 00:12:44.639723 containerd[1569]: time="2025-09-04T00:12:44.639632999Z" level=info msg="received exit event sandbox_id:\"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" exit_status:137 exited_at:{seconds:1756944764 nanos:373082747}" Sep 4 00:12:44.640157 containerd[1569]: time="2025-09-04T00:12:44.639835843Z" level=info msg="received exit event sandbox_id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" exit_status:137 exited_at:{seconds:1756944764 nanos:331919337}" Sep 4 00:12:44.648741 containerd[1569]: time="2025-09-04T00:12:44.648602450Z" level=info msg="TearDown network for sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" successfully" Sep 4 00:12:44.648741 containerd[1569]: time="2025-09-04T00:12:44.648718749Z" level=info msg="StopPodSandbox for \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" returns successfully" Sep 4 00:12:44.650729 containerd[1569]: time="2025-09-04T00:12:44.650678399Z" level=info msg="TearDown network for sandbox 
\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" successfully" Sep 4 00:12:44.650729 containerd[1569]: time="2025-09-04T00:12:44.650725177Z" level=info msg="StopPodSandbox for \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" returns successfully" Sep 4 00:12:44.808127 kubelet[2754]: I0904 00:12:44.808061 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hubble-tls\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.808865 kubelet[2754]: I0904 00:12:44.808839 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-clustermesh-secrets\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809558 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxq2m\" (UniqueName: \"kubernetes.io/projected/09d87aba-88fe-4de6-bdf3-1647db944452-kube-api-access-mxq2m\") pod \"09d87aba-88fe-4de6-bdf3-1647db944452\" (UID: \"09d87aba-88fe-4de6-bdf3-1647db944452\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809611 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-run\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809633 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-lib-modules\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: 
\"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809686 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-etc-cni-netd\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809708 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fstsz\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-kube-api-access-fstsz\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810339 kubelet[2754]: I0904 00:12:44.809726 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-bpf-maps\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809742 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hostproc\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809758 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-kernel\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809778 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-net\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809807 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-config-path\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809826 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d87aba-88fe-4de6-bdf3-1647db944452-cilium-config-path\") pod \"09d87aba-88fe-4de6-bdf3-1647db944452\" (UID: \"09d87aba-88fe-4de6-bdf3-1647db944452\") " Sep 4 00:12:44.810691 kubelet[2754]: I0904 00:12:44.809848 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cni-path\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810870 kubelet[2754]: I0904 00:12:44.809865 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-xtables-lock\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810870 kubelet[2754]: I0904 00:12:44.809886 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-cgroup\") pod \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\" (UID: \"c2ccef98-f022-42dc-9bb4-5ff35b8600fc\") " Sep 4 00:12:44.810870 kubelet[2754]: I0904 00:12:44.809996 2754 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.812605 kubelet[2754]: I0904 00:12:44.812538 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.812605 kubelet[2754]: I0904 00:12:44.812610 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.812862 kubelet[2754]: I0904 00:12:44.812669 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.812862 kubelet[2754]: I0904 00:12:44.812704 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.819998 kubelet[2754]: I0904 00:12:44.818819 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.819998 kubelet[2754]: I0904 00:12:44.819317 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.819998 kubelet[2754]: I0904 00:12:44.819328 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.821978 kubelet[2754]: I0904 00:12:44.820520 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.822313 kubelet[2754]: I0904 00:12:44.819531 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 00:12:44.823821 kubelet[2754]: I0904 00:12:44.823775 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 00:12:44.826258 kubelet[2754]: I0904 00:12:44.826199 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 00:12:44.831779 kubelet[2754]: I0904 00:12:44.831719 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d87aba-88fe-4de6-bdf3-1647db944452-kube-api-access-mxq2m" (OuterVolumeSpecName: "kube-api-access-mxq2m") pod "09d87aba-88fe-4de6-bdf3-1647db944452" (UID: "09d87aba-88fe-4de6-bdf3-1647db944452"). InnerVolumeSpecName "kube-api-access-mxq2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 00:12:44.831992 kubelet[2754]: I0904 00:12:44.831719 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 00:12:44.832114 kubelet[2754]: I0904 00:12:44.832039 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-kube-api-access-fstsz" (OuterVolumeSpecName: "kube-api-access-fstsz") pod "c2ccef98-f022-42dc-9bb4-5ff35b8600fc" (UID: "c2ccef98-f022-42dc-9bb4-5ff35b8600fc"). InnerVolumeSpecName "kube-api-access-fstsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 00:12:44.832297 kubelet[2754]: I0904 00:12:44.832251 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d87aba-88fe-4de6-bdf3-1647db944452-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09d87aba-88fe-4de6-bdf3-1647db944452" (UID: "09d87aba-88fe-4de6-bdf3-1647db944452"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911235 2754 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911300 2754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxq2m\" (UniqueName: \"kubernetes.io/projected/09d87aba-88fe-4de6-bdf3-1647db944452-kube-api-access-mxq2m\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911344 2754 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911360 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911374 2754 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911385 2754 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 00:12:44.911397 2754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fstsz\" (UniqueName: \"kubernetes.io/projected/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-kube-api-access-fstsz\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.911460 kubelet[2754]: I0904 
00:12:44.911410 2754 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911432 2754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911444 2754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911455 2754 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911468 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d87aba-88fe-4de6-bdf3-1647db944452-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911479 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911490 2754 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911502 2754 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:44.912015 kubelet[2754]: I0904 00:12:44.911514 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ccef98-f022-42dc-9bb4-5ff35b8600fc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 00:12:45.093737 kubelet[2754]: I0904 00:12:45.091941 2754 scope.go:117] "RemoveContainer" containerID="03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e" Sep 4 00:12:45.109831 containerd[1569]: time="2025-09-04T00:12:45.108775062Z" level=info msg="RemoveContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\"" Sep 4 00:12:45.121385 systemd[1]: Removed slice kubepods-besteffort-pod09d87aba_88fe_4de6_bdf3_1647db944452.slice - libcontainer container kubepods-besteffort-pod09d87aba_88fe_4de6_bdf3_1647db944452.slice. Sep 4 00:12:45.169556 containerd[1569]: time="2025-09-04T00:12:45.169205099Z" level=info msg="RemoveContainer for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" returns successfully" Sep 4 00:12:45.171548 systemd[1]: Removed slice kubepods-burstable-podc2ccef98_f022_42dc_9bb4_5ff35b8600fc.slice - libcontainer container kubepods-burstable-podc2ccef98_f022_42dc_9bb4_5ff35b8600fc.slice. Sep 4 00:12:45.175390 systemd[1]: kubepods-burstable-podc2ccef98_f022_42dc_9bb4_5ff35b8600fc.slice: Consumed 9.514s CPU time, 126.6M memory peak, 260K read from disk, 13.3M written to disk. 
Sep 4 00:12:45.193128 kubelet[2754]: I0904 00:12:45.192816 2754 scope.go:117] "RemoveContainer" containerID="03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e" Sep 4 00:12:45.193701 containerd[1569]: time="2025-09-04T00:12:45.193386791Z" level=error msg="ContainerStatus for \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\": not found" Sep 4 00:12:45.198776 kubelet[2754]: E0904 00:12:45.198173 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\": not found" containerID="03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e" Sep 4 00:12:45.198776 kubelet[2754]: I0904 00:12:45.198254 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e"} err="failed to get container status \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"03898e7ff0233c5a1dd60c14c2e17d8c5359ca2d89ab4f8d5e11a68e4062ef7e\": not found" Sep 4 00:12:45.198776 kubelet[2754]: I0904 00:12:45.198376 2754 scope.go:117] "RemoveContainer" containerID="73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee" Sep 4 00:12:45.207471 containerd[1569]: time="2025-09-04T00:12:45.207072220Z" level=info msg="RemoveContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\"" Sep 4 00:12:45.227546 containerd[1569]: time="2025-09-04T00:12:45.227478376Z" level=info msg="RemoveContainer for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" returns successfully" Sep 4 00:12:45.228131 kubelet[2754]: 
I0904 00:12:45.228094 2754 scope.go:117] "RemoveContainer" containerID="100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4" Sep 4 00:12:45.244017 containerd[1569]: time="2025-09-04T00:12:45.243892255Z" level=info msg="RemoveContainer for \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\"" Sep 4 00:12:45.244695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb-shm.mount: Deactivated successfully. Sep 4 00:12:45.245004 systemd[1]: var-lib-kubelet-pods-09d87aba\x2d88fe\x2d4de6\x2dbdf3\x2d1647db944452-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxq2m.mount: Deactivated successfully. Sep 4 00:12:45.245189 systemd[1]: var-lib-kubelet-pods-c2ccef98\x2df022\x2d42dc\x2d9bb4\x2d5ff35b8600fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfstsz.mount: Deactivated successfully. Sep 4 00:12:45.245285 systemd[1]: var-lib-kubelet-pods-c2ccef98\x2df022\x2d42dc\x2d9bb4\x2d5ff35b8600fc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 00:12:45.245391 systemd[1]: var-lib-kubelet-pods-c2ccef98\x2df022\x2d42dc\x2d9bb4\x2d5ff35b8600fc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 4 00:12:45.274022 containerd[1569]: time="2025-09-04T00:12:45.271417848Z" level=info msg="RemoveContainer for \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" returns successfully" Sep 4 00:12:45.274182 kubelet[2754]: I0904 00:12:45.271907 2754 scope.go:117] "RemoveContainer" containerID="57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6" Sep 4 00:12:45.281453 containerd[1569]: time="2025-09-04T00:12:45.281324065Z" level=info msg="RemoveContainer for \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\"" Sep 4 00:12:45.343603 containerd[1569]: time="2025-09-04T00:12:45.342544874Z" level=info msg="RemoveContainer for \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" returns successfully" Sep 4 00:12:45.346013 kubelet[2754]: I0904 00:12:45.343259 2754 scope.go:117] "RemoveContainer" containerID="33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13" Sep 4 00:12:45.355143 containerd[1569]: time="2025-09-04T00:12:45.355060827Z" level=info msg="RemoveContainer for \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\"" Sep 4 00:12:45.443725 containerd[1569]: time="2025-09-04T00:12:45.438275338Z" level=info msg="RemoveContainer for \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" returns successfully" Sep 4 00:12:45.447095 kubelet[2754]: I0904 00:12:45.439931 2754 scope.go:117] "RemoveContainer" containerID="a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f" Sep 4 00:12:45.458901 containerd[1569]: time="2025-09-04T00:12:45.458849179Z" level=info msg="RemoveContainer for \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\"" Sep 4 00:12:45.468428 containerd[1569]: time="2025-09-04T00:12:45.468253831Z" level=info msg="RemoveContainer for \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" returns successfully" Sep 4 00:12:45.475416 kubelet[2754]: I0904 00:12:45.473758 2754 scope.go:117] "RemoveContainer" 
containerID="73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee" Sep 4 00:12:45.480277 containerd[1569]: time="2025-09-04T00:12:45.479565740Z" level=error msg="ContainerStatus for \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\": not found" Sep 4 00:12:45.489808 kubelet[2754]: E0904 00:12:45.486879 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\": not found" containerID="73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee" Sep 4 00:12:45.489808 kubelet[2754]: I0904 00:12:45.486959 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee"} err="failed to get container status \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\": rpc error: code = NotFound desc = an error occurred when try to find container \"73e53c8d265ee02740b514e8798b0ae8b81e3aace6614a07e727acbadbbc0dee\": not found" Sep 4 00:12:45.489808 kubelet[2754]: I0904 00:12:45.487001 2754 scope.go:117] "RemoveContainer" containerID="100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4" Sep 4 00:12:45.490825 containerd[1569]: time="2025-09-04T00:12:45.490761913Z" level=error msg="ContainerStatus for \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\": not found" Sep 4 00:12:45.491614 kubelet[2754]: E0904 00:12:45.491430 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\": not found" containerID="100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4" Sep 4 00:12:45.491614 kubelet[2754]: I0904 00:12:45.491489 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4"} err="failed to get container status \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"100c050a4d63316ee68974318639a0bb01d3ff85f998bdf1c9ad45f202bf08b4\": not found" Sep 4 00:12:45.491614 kubelet[2754]: I0904 00:12:45.491535 2754 scope.go:117] "RemoveContainer" containerID="57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6" Sep 4 00:12:45.492121 containerd[1569]: time="2025-09-04T00:12:45.492044834Z" level=error msg="ContainerStatus for \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\": not found" Sep 4 00:12:45.492343 kubelet[2754]: E0904 00:12:45.492237 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\": not found" containerID="57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6" Sep 4 00:12:45.492343 kubelet[2754]: I0904 00:12:45.492266 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6"} err="failed to get container status \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"57ccff14d98c63d6d82164d46adf409ebfe94b631bab77bf696278c4d5b8e6b6\": not found" Sep 4 00:12:45.492343 kubelet[2754]: I0904 00:12:45.492284 2754 scope.go:117] "RemoveContainer" containerID="33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13" Sep 4 00:12:45.492607 containerd[1569]: time="2025-09-04T00:12:45.492574133Z" level=error msg="ContainerStatus for \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\": not found" Sep 4 00:12:45.498290 kubelet[2754]: E0904 00:12:45.497822 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\": not found" containerID="33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13" Sep 4 00:12:45.498290 kubelet[2754]: I0904 00:12:45.497892 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13"} err="failed to get container status \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\": rpc error: code = NotFound desc = an error occurred when try to find container \"33767d77063f8b3443dd608130a9538052493e77d03f5b56bda2c4232e414d13\": not found" Sep 4 00:12:45.498290 kubelet[2754]: I0904 00:12:45.497930 2754 scope.go:117] "RemoveContainer" containerID="a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f" Sep 4 00:12:45.501508 containerd[1569]: time="2025-09-04T00:12:45.500004277Z" level=error msg="ContainerStatus for \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\": not found" Sep 4 00:12:45.506330 kubelet[2754]: E0904 00:12:45.506274 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\": not found" containerID="a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f" Sep 4 00:12:45.506611 kubelet[2754]: I0904 00:12:45.506559 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f"} err="failed to get container status \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5d273be53a9d032dd63919d115c23b2966b0a1e93ef3de9318a48967fd1894f\": not found" Sep 4 00:12:45.639290 sshd[4430]: Connection closed by 10.0.0.1 port 52748 Sep 4 00:12:45.639061 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:45.674584 systemd[1]: sshd@30-10.0.0.134:22-10.0.0.1:52748.service: Deactivated successfully. Sep 4 00:12:45.685000 systemd[1]: session-31.scope: Deactivated successfully. Sep 4 00:12:45.685424 systemd[1]: session-31.scope: Consumed 1.104s CPU time, 26.2M memory peak. Sep 4 00:12:45.688900 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit. Sep 4 00:12:45.700166 systemd[1]: Started sshd@31-10.0.0.134:22-10.0.0.1:52758.service - OpenSSH per-connection server daemon (10.0.0.1:52758). Sep 4 00:12:45.702383 systemd-logind[1539]: Removed session 31. 
Sep 4 00:12:45.796217 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 52758 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:45.798613 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:45.812532 systemd-logind[1539]: New session 32 of user core. Sep 4 00:12:45.820584 containerd[1569]: time="2025-09-04T00:12:45.820485309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" id:\"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" pid:3010 exit_status:137 exited_at:{seconds:1756944764 nanos:331919337}" Sep 4 00:12:45.821951 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 4 00:12:46.191144 kubelet[2754]: I0904 00:12:46.191073 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d87aba-88fe-4de6-bdf3-1647db944452" path="/var/lib/kubelet/pods/09d87aba-88fe-4de6-bdf3-1647db944452/volumes" Sep 4 00:12:46.192046 kubelet[2754]: I0904 00:12:46.191826 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" path="/var/lib/kubelet/pods/c2ccef98-f022-42dc-9bb4-5ff35b8600fc/volumes" Sep 4 00:12:47.007307 sshd[4584]: Connection closed by 10.0.0.1 port 52758 Sep 4 00:12:47.008536 sshd-session[4582]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:47.037941 systemd[1]: sshd@31-10.0.0.134:22-10.0.0.1:52758.service: Deactivated successfully. Sep 4 00:12:47.046546 systemd[1]: session-32.scope: Deactivated successfully. Sep 4 00:12:47.053892 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit. Sep 4 00:12:47.065907 systemd-logind[1539]: Removed session 32. Sep 4 00:12:47.071102 systemd[1]: Started sshd@32-10.0.0.134:22-10.0.0.1:52774.service - OpenSSH per-connection server daemon (10.0.0.1:52774). 
Sep 4 00:12:47.207022 sshd[4596]: Accepted publickey for core from 10.0.0.1 port 52774 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:47.210368 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:47.227666 systemd-logind[1539]: New session 33 of user core. Sep 4 00:12:47.238043 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 4 00:12:47.298555 kubelet[2754]: E0904 00:12:47.298261 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="mount-cgroup" Sep 4 00:12:47.303188 kubelet[2754]: E0904 00:12:47.302071 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09d87aba-88fe-4de6-bdf3-1647db944452" containerName="cilium-operator" Sep 4 00:12:47.303188 kubelet[2754]: E0904 00:12:47.302132 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="clean-cilium-state" Sep 4 00:12:47.303188 kubelet[2754]: E0904 00:12:47.302144 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="apply-sysctl-overwrites" Sep 4 00:12:47.303188 kubelet[2754]: E0904 00:12:47.302152 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="mount-bpf-fs" Sep 4 00:12:47.303188 kubelet[2754]: E0904 00:12:47.302160 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="cilium-agent" Sep 4 00:12:47.303188 kubelet[2754]: I0904 00:12:47.302262 2754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ccef98-f022-42dc-9bb4-5ff35b8600fc" containerName="cilium-agent" Sep 4 00:12:47.303188 kubelet[2754]: I0904 00:12:47.302303 2754 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d87aba-88fe-4de6-bdf3-1647db944452" 
containerName="cilium-operator" Sep 4 00:12:47.320512 systemd[1]: Created slice kubepods-burstable-pod14d7b300_8599_4d1f_a3bd_729a76e340f0.slice - libcontainer container kubepods-burstable-pod14d7b300_8599_4d1f_a3bd_729a76e340f0.slice. Sep 4 00:12:47.332350 sshd[4598]: Connection closed by 10.0.0.1 port 52774 Sep 4 00:12:47.328808 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Sep 4 00:12:47.361590 systemd[1]: sshd@32-10.0.0.134:22-10.0.0.1:52774.service: Deactivated successfully. Sep 4 00:12:47.376108 systemd[1]: session-33.scope: Deactivated successfully. Sep 4 00:12:47.385950 systemd-logind[1539]: Session 33 logged out. Waiting for processes to exit. Sep 4 00:12:47.386870 systemd[1]: Started sshd@33-10.0.0.134:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Sep 4 00:12:47.393797 systemd-logind[1539]: Removed session 33. Sep 4 00:12:47.456284 kubelet[2754]: I0904 00:12:47.455873 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-xtables-lock\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457026 kubelet[2754]: I0904 00:12:47.456575 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/14d7b300-8599-4d1f-a3bd-729a76e340f0-cilium-ipsec-secrets\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457026 kubelet[2754]: I0904 00:12:47.456628 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-hostproc\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " 
pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457026 kubelet[2754]: I0904 00:12:47.456672 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-host-proc-sys-net\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457026 kubelet[2754]: I0904 00:12:47.456699 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-host-proc-sys-kernel\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457026 kubelet[2754]: I0904 00:12:47.456722 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-lib-modules\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456742 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvs4\" (UniqueName: \"kubernetes.io/projected/14d7b300-8599-4d1f-a3bd-729a76e340f0-kube-api-access-2rvs4\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456764 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-cilium-cgroup\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456782 2754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14d7b300-8599-4d1f-a3bd-729a76e340f0-hubble-tls\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456802 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14d7b300-8599-4d1f-a3bd-729a76e340f0-clustermesh-secrets\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456828 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-bpf-maps\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.457989 kubelet[2754]: I0904 00:12:47.456862 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-cni-path\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.458212 kubelet[2754]: I0904 00:12:47.456882 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-etc-cni-netd\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.458212 kubelet[2754]: I0904 00:12:47.456902 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/14d7b300-8599-4d1f-a3bd-729a76e340f0-cilium-config-path\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.458212 kubelet[2754]: I0904 00:12:47.456929 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14d7b300-8599-4d1f-a3bd-729a76e340f0-cilium-run\") pod \"cilium-9mk9v\" (UID: \"14d7b300-8599-4d1f-a3bd-729a76e340f0\") " pod="kube-system/cilium-9mk9v" Sep 4 00:12:47.522353 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:1o0Rn/iFE2HG+o4C2c8UWdMz6TCxmTa3FwGAPCIw01A Sep 4 00:12:47.525286 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:12:47.541815 systemd-logind[1539]: New session 34 of user core. Sep 4 00:12:47.550994 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 4 00:12:47.688667 kubelet[2754]: E0904 00:12:47.688384 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:47.689607 containerd[1569]: time="2025-09-04T00:12:47.689545165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mk9v,Uid:14d7b300-8599-4d1f-a3bd-729a76e340f0,Namespace:kube-system,Attempt:0,}" Sep 4 00:12:47.760713 containerd[1569]: time="2025-09-04T00:12:47.760486781Z" level=info msg="connecting to shim 2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:12:47.832066 systemd[1]: Started cri-containerd-2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623.scope - libcontainer container 2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623. 
Sep 4 00:12:47.958163 containerd[1569]: time="2025-09-04T00:12:47.958108819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mk9v,Uid:14d7b300-8599-4d1f-a3bd-729a76e340f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\"" Sep 4 00:12:47.959562 kubelet[2754]: E0904 00:12:47.959502 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:47.965258 containerd[1569]: time="2025-09-04T00:12:47.965182608Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 00:12:47.991018 containerd[1569]: time="2025-09-04T00:12:47.990837673Z" level=info msg="Container 310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:12:48.015673 containerd[1569]: time="2025-09-04T00:12:48.015500593Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\"" Sep 4 00:12:48.016528 containerd[1569]: time="2025-09-04T00:12:48.016487255Z" level=info msg="StartContainer for \"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\"" Sep 4 00:12:48.020568 containerd[1569]: time="2025-09-04T00:12:48.020428702Z" level=info msg="connecting to shim 310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" protocol=ttrpc version=3 Sep 4 00:12:48.081081 systemd[1]: Started cri-containerd-310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f.scope - libcontainer container 
310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f. Sep 4 00:12:48.136835 containerd[1569]: time="2025-09-04T00:12:48.136630996Z" level=info msg="StartContainer for \"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\" returns successfully" Sep 4 00:12:48.152821 systemd[1]: cri-containerd-310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f.scope: Deactivated successfully. Sep 4 00:12:48.154941 containerd[1569]: time="2025-09-04T00:12:48.154875954Z" level=info msg="received exit event container_id:\"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\" id:\"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\" pid:4677 exited_at:{seconds:1756944768 nanos:154179690}" Sep 4 00:12:48.155089 containerd[1569]: time="2025-09-04T00:12:48.155064209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\" id:\"310dfbace2abdc661d920c78bb92b597f7696b0c39bc9b76f82251f0a370b29f\" pid:4677 exited_at:{seconds:1756944768 nanos:154179690}" Sep 4 00:12:48.160620 kubelet[2754]: E0904 00:12:48.160504 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:48.192473 containerd[1569]: time="2025-09-04T00:12:48.192065400Z" level=info msg="StopPodSandbox for \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\"" Sep 4 00:12:48.192473 containerd[1569]: time="2025-09-04T00:12:48.192415009Z" level=info msg="TearDown network for sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" successfully" Sep 4 00:12:48.192717 containerd[1569]: time="2025-09-04T00:12:48.192677915Z" level=info msg="StopPodSandbox for \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" returns successfully" Sep 4 00:12:48.200179 containerd[1569]: time="2025-09-04T00:12:48.196826163Z" 
level=info msg="RemovePodSandbox for \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\"" Sep 4 00:12:48.200179 containerd[1569]: time="2025-09-04T00:12:48.197463175Z" level=info msg="Forcibly stopping sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\"" Sep 4 00:12:48.205283 containerd[1569]: time="2025-09-04T00:12:48.202288381Z" level=info msg="TearDown network for sandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" successfully" Sep 4 00:12:48.205283 containerd[1569]: time="2025-09-04T00:12:48.204911069Z" level=info msg="Ensure that sandbox 97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a in task-service has been cleanup successfully" Sep 4 00:12:48.246485 containerd[1569]: time="2025-09-04T00:12:48.246117866Z" level=info msg="RemovePodSandbox \"97c844126a6b5b0743304d2db2247a7aff297470292ca17ce4944d058797362a\" returns successfully" Sep 4 00:12:48.248861 containerd[1569]: time="2025-09-04T00:12:48.248817079Z" level=info msg="StopPodSandbox for \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\"" Sep 4 00:12:48.249215 containerd[1569]: time="2025-09-04T00:12:48.249186146Z" level=info msg="TearDown network for sandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" successfully" Sep 4 00:12:48.249313 containerd[1569]: time="2025-09-04T00:12:48.249290612Z" level=info msg="StopPodSandbox for \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" returns successfully" Sep 4 00:12:48.254542 containerd[1569]: time="2025-09-04T00:12:48.251793967Z" level=info msg="RemovePodSandbox for \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\"" Sep 4 00:12:48.254542 containerd[1569]: time="2025-09-04T00:12:48.251841356Z" level=info msg="Forcibly stopping sandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\"" Sep 4 00:12:48.254542 containerd[1569]: time="2025-09-04T00:12:48.251963788Z" level=info 
msg="TearDown network for sandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" successfully" Sep 4 00:12:48.255360 containerd[1569]: time="2025-09-04T00:12:48.255328917Z" level=info msg="Ensure that sandbox 0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb in task-service has been cleanup successfully" Sep 4 00:12:48.273203 containerd[1569]: time="2025-09-04T00:12:48.273086225Z" level=info msg="RemovePodSandbox \"0b6053d24fc6ccd27c6443a1543c43fc3684909eb3d7f0ae3a7d1fc4146179bb\" returns successfully" Sep 4 00:12:48.313997 kubelet[2754]: E0904 00:12:48.313430 2754 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 00:12:49.183989 kubelet[2754]: E0904 00:12:49.180498 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:49.193109 containerd[1569]: time="2025-09-04T00:12:49.193059493Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 00:12:49.263212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344110630.mount: Deactivated successfully. 
Sep 4 00:12:49.270817 containerd[1569]: time="2025-09-04T00:12:49.270755643Z" level=info msg="Container 4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:12:49.315504 containerd[1569]: time="2025-09-04T00:12:49.314551000Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\"" Sep 4 00:12:49.316508 containerd[1569]: time="2025-09-04T00:12:49.316438702Z" level=info msg="StartContainer for \"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\"" Sep 4 00:12:49.323237 containerd[1569]: time="2025-09-04T00:12:49.321976381Z" level=info msg="connecting to shim 4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" protocol=ttrpc version=3 Sep 4 00:12:49.407993 systemd[1]: Started cri-containerd-4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5.scope - libcontainer container 4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5. Sep 4 00:12:49.504031 containerd[1569]: time="2025-09-04T00:12:49.500513645Z" level=info msg="StartContainer for \"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\" returns successfully" Sep 4 00:12:49.528673 systemd[1]: cri-containerd-4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5.scope: Deactivated successfully. 
Sep 4 00:12:49.532240 containerd[1569]: time="2025-09-04T00:12:49.531462541Z" level=info msg="received exit event container_id:\"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\" id:\"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\" pid:4726 exited_at:{seconds:1756944769 nanos:530625081}" Sep 4 00:12:49.532240 containerd[1569]: time="2025-09-04T00:12:49.531998423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\" id:\"4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5\" pid:4726 exited_at:{seconds:1756944769 nanos:530625081}" Sep 4 00:12:49.605009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d10a776b39a95d10fd2d8ce2f1fc0214c0018c46582fcdb6cf3cc079c2485d5-rootfs.mount: Deactivated successfully. Sep 4 00:12:50.198970 kubelet[2754]: E0904 00:12:50.197331 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:50.214460 containerd[1569]: time="2025-09-04T00:12:50.210002133Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 00:12:50.285276 containerd[1569]: time="2025-09-04T00:12:50.285215079Z" level=info msg="Container f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:12:50.485164 containerd[1569]: time="2025-09-04T00:12:50.484987375Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\"" Sep 4 00:12:50.496574 containerd[1569]: time="2025-09-04T00:12:50.490682710Z" level=info 
msg="StartContainer for \"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\"" Sep 4 00:12:50.500699 containerd[1569]: time="2025-09-04T00:12:50.497913764Z" level=info msg="connecting to shim f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" protocol=ttrpc version=3 Sep 4 00:12:50.586988 systemd[1]: Started cri-containerd-f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41.scope - libcontainer container f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41. Sep 4 00:12:50.779832 systemd[1]: cri-containerd-f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41.scope: Deactivated successfully. Sep 4 00:12:50.792176 containerd[1569]: time="2025-09-04T00:12:50.791496174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\" id:\"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\" pid:4772 exited_at:{seconds:1756944770 nanos:789487083}" Sep 4 00:12:50.806352 containerd[1569]: time="2025-09-04T00:12:50.805860115Z" level=info msg="received exit event container_id:\"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\" id:\"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\" pid:4772 exited_at:{seconds:1756944770 nanos:789487083}" Sep 4 00:12:50.816621 containerd[1569]: time="2025-09-04T00:12:50.815227559Z" level=info msg="StartContainer for \"f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41\" returns successfully" Sep 4 00:12:50.830586 kubelet[2754]: I0904 00:12:50.830342 2754 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T00:12:50Z","lastTransitionTime":"2025-09-04T00:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 00:12:50.912566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f246b02003b3d3b4b660970b7d3879d6f928d5c6022a5ba736bf7d6018443a41-rootfs.mount: Deactivated successfully. Sep 4 00:12:51.220318 kubelet[2754]: E0904 00:12:51.220277 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:51.241678 containerd[1569]: time="2025-09-04T00:12:51.239369519Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 00:12:51.795289 containerd[1569]: time="2025-09-04T00:12:51.788816579Z" level=info msg="Container 6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:12:52.209613 containerd[1569]: time="2025-09-04T00:12:52.209514173Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\"" Sep 4 00:12:52.214413 containerd[1569]: time="2025-09-04T00:12:52.214351308Z" level=info msg="StartContainer for \"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\"" Sep 4 00:12:52.217512 containerd[1569]: time="2025-09-04T00:12:52.217458971Z" level=info msg="connecting to shim 6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" protocol=ttrpc version=3 Sep 4 00:12:52.293044 systemd[1]: Started cri-containerd-6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a.scope - libcontainer container 
6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a. Sep 4 00:12:52.372046 systemd[1]: cri-containerd-6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a.scope: Deactivated successfully. Sep 4 00:12:52.372831 containerd[1569]: time="2025-09-04T00:12:52.372727071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\" id:\"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\" pid:4812 exited_at:{seconds:1756944772 nanos:372418048}" Sep 4 00:12:52.462320 containerd[1569]: time="2025-09-04T00:12:52.462034428Z" level=info msg="received exit event container_id:\"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\" id:\"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\" pid:4812 exited_at:{seconds:1756944772 nanos:372418048}" Sep 4 00:12:52.494602 containerd[1569]: time="2025-09-04T00:12:52.494406908Z" level=info msg="StartContainer for \"6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a\" returns successfully" Sep 4 00:12:52.555200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ceb0bbc348bce490076ad875ff57c181d9ce66289f244b15f9a5e70cde8723a-rootfs.mount: Deactivated successfully. 
Sep 4 00:12:53.283057 kubelet[2754]: E0904 00:12:53.283008 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:53.296717 containerd[1569]: time="2025-09-04T00:12:53.293815454Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 00:12:53.314787 kubelet[2754]: E0904 00:12:53.314738 2754 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 00:12:53.354676 containerd[1569]: time="2025-09-04T00:12:53.353701611Z" level=info msg="Container ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:12:53.383845 containerd[1569]: time="2025-09-04T00:12:53.383743080Z" level=info msg="CreateContainer within sandbox \"2340d4576dd7a239106906130a6ee6c1d82279fb3557c0c00c081c5c5e8f7623\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\"" Sep 4 00:12:53.384881 containerd[1569]: time="2025-09-04T00:12:53.384840290Z" level=info msg="StartContainer for \"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\"" Sep 4 00:12:53.386537 containerd[1569]: time="2025-09-04T00:12:53.386472318Z" level=info msg="connecting to shim ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5" address="unix:///run/containerd/s/db5f7b9f7af8c88442b012a7f6f7ddbfd545eead792dabf2136f50732968114d" protocol=ttrpc version=3 Sep 4 00:12:53.456599 systemd[1]: Started cri-containerd-ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5.scope - libcontainer container ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5. 
Sep 4 00:12:53.595133 containerd[1569]: time="2025-09-04T00:12:53.594807795Z" level=info msg="StartContainer for \"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" returns successfully" Sep 4 00:12:53.767296 containerd[1569]: time="2025-09-04T00:12:53.765988865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"5d3050b6713419cfc1deb06de833c9c057e249877fdde89c79f7d9bac4e8ff32\" pid:4879 exited_at:{seconds:1756944773 nanos:765483712}" Sep 4 00:12:54.303896 kubelet[2754]: E0904 00:12:54.303814 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:54.754189 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 4 00:12:55.691207 kubelet[2754]: E0904 00:12:55.690157 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:12:57.018506 containerd[1569]: time="2025-09-04T00:12:57.015476225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"2659bd4e45d54b5e39b78c7562fff0ae7cad6c8845c9fa2adf11f3d86f82ecd2\" pid:5026 exit_status:1 exited_at:{seconds:1756944777 nanos:14859522}" Sep 4 00:12:59.350680 containerd[1569]: time="2025-09-04T00:12:59.349205514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"cd9391520b5e8af93d75de5cdea5b29399ffb7e3f592485b27a78aa5e23cf65c\" pid:5274 exit_status:1 exited_at:{seconds:1756944779 nanos:347325359}" Sep 4 00:13:00.185325 kubelet[2754]: E0904 00:13:00.184885 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:13:00.529052 systemd-networkd[1467]: lxc_health: Link UP Sep 4 00:13:00.582889 systemd-networkd[1467]: lxc_health: Gained carrier Sep 4 00:13:01.687533 containerd[1569]: time="2025-09-04T00:13:01.685580497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"554a44fe66c0bb639e36324cf7aef5754087e285f4eb8395badf9115be54fbfa\" pid:5432 exited_at:{seconds:1756944781 nanos:684949137}" Sep 4 00:13:01.693132 kubelet[2754]: E0904 00:13:01.693090 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:13:01.978727 kubelet[2754]: I0904 00:13:01.977522 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9mk9v" podStartSLOduration=14.977499635000001 podStartE2EDuration="14.977499635s" podCreationTimestamp="2025-09-04 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:12:54.363060739 +0000 UTC m=+126.471120803" watchObservedRunningTime="2025-09-04 00:13:01.977499635 +0000 UTC m=+134.085559669" Sep 4 00:13:02.342482 kubelet[2754]: E0904 00:13:02.342304 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:13:02.362104 systemd-networkd[1467]: lxc_health: Gained IPv6LL Sep 4 00:13:03.349584 kubelet[2754]: E0904 00:13:03.348946 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:13:03.935385 containerd[1569]: time="2025-09-04T00:13:03.935142519Z" level=info msg="TaskExit event 
in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"9d4960b76e3d1e512ee9db249303b132d65214529c85773cfa584456c01ffd3d\" pid:5471 exited_at:{seconds:1756944783 nanos:934159224}" Sep 4 00:13:03.957697 kubelet[2754]: E0904 00:13:03.957461 2754 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44392->127.0.0.1:38389: write tcp 127.0.0.1:44392->127.0.0.1:38389: write: connection reset by peer Sep 4 00:13:06.260350 containerd[1569]: time="2025-09-04T00:13:06.259974231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed3d7fdf9c751a3697a9b1dc19bbe58615d0ea3f48ca2aebd85347b423aee2a5\" id:\"1d16a964229e4df284d9fc181033c3a3cb6bc71859a8efff0c4d8dbe21361308\" pid:5498 exited_at:{seconds:1756944786 nanos:255094180}" Sep 4 00:13:06.472334 sshd[4607]: Connection closed by 10.0.0.1 port 52778 Sep 4 00:13:06.476550 sshd-session[4605]: pam_unix(sshd:session): session closed for user core Sep 4 00:13:06.500283 systemd[1]: sshd@33-10.0.0.134:22-10.0.0.1:52778.service: Deactivated successfully. Sep 4 00:13:06.507526 systemd[1]: session-34.scope: Deactivated successfully. Sep 4 00:13:06.516683 systemd-logind[1539]: Session 34 logged out. Waiting for processes to exit. Sep 4 00:13:06.530425 systemd-logind[1539]: Removed session 34.