Sep 16 05:02:34.813357 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 05:02:34.813377 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:02:34.813388 kernel: BIOS-provided physical RAM map: Sep 16 05:02:34.813395 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Sep 16 05:02:34.813401 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Sep 16 05:02:34.813408 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Sep 16 05:02:34.813416 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Sep 16 05:02:34.813422 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Sep 16 05:02:34.813429 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Sep 16 05:02:34.813435 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Sep 16 05:02:34.813442 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Sep 16 05:02:34.813450 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Sep 16 05:02:34.813457 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Sep 16 05:02:34.813463 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Sep 16 05:02:34.813481 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Sep 16 05:02:34.813488 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Sep 16 05:02:34.813497 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 16 05:02:34.813504 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 16 05:02:34.813511 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 16 05:02:34.813518 kernel: NX (Execute Disable) protection: active Sep 16 05:02:34.813524 kernel: APIC: Static calls initialized Sep 16 05:02:34.813531 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable Sep 16 05:02:34.813539 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable Sep 16 05:02:34.813546 kernel: extended physical RAM map: Sep 16 05:02:34.813553 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Sep 16 05:02:34.813560 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Sep 16 05:02:34.813567 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Sep 16 05:02:34.813576 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Sep 16 05:02:34.813583 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable Sep 16 05:02:34.813589 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable Sep 16 05:02:34.813596 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable Sep 16 05:02:34.813603 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable Sep 16 05:02:34.813610 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable Sep 16 05:02:34.813617 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Sep 16 05:02:34.813624 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Sep 16 05:02:34.813631 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Sep 16 05:02:34.813638 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Sep 16 05:02:34.813645 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Sep 16 05:02:34.813654 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Sep 16 05:02:34.813661 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Sep 16 05:02:34.813671 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Sep 16 05:02:34.813678 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 16 05:02:34.813685 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 16 05:02:34.813693 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 16 05:02:34.813702 kernel: efi: EFI v2.7 by EDK II Sep 16 05:02:34.813709 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Sep 16 05:02:34.813716 kernel: random: crng init done Sep 16 05:02:34.813723 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 16 05:02:34.813731 kernel: secureboot: Secure boot enabled Sep 16 05:02:34.813738 kernel: SMBIOS 2.8 present. Sep 16 05:02:34.813745 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 16 05:02:34.813753 kernel: DMI: Memory slots populated: 1/1 Sep 16 05:02:34.813760 kernel: Hypervisor detected: KVM Sep 16 05:02:34.813767 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 05:02:34.813774 kernel: kvm-clock: using sched offset of 4767395302 cycles Sep 16 05:02:34.813783 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 05:02:34.813791 kernel: tsc: Detected 2794.748 MHz processor Sep 16 05:02:34.813799 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 05:02:34.813806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 05:02:34.813814 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Sep 16 05:02:34.813821 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 16 05:02:34.813829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 05:02:34.813836 kernel: Using GB pages for direct mapping Sep 16 05:02:34.813844 kernel: ACPI: Early table checksum verification disabled Sep 16 05:02:34.813853 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Sep 16 05:02:34.813860 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 16 05:02:34.813868 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 05:02:34.813876 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 05:02:34.813883 kernel: ACPI: FACS 0x000000009BBDD000 000040 Sep 16 05:02:34.813890 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 05:02:34.813898 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 05:02:34.813905 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 05:02:34.813913 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Sep 16 05:02:34.813922 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 16 05:02:34.813929 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Sep 16 05:02:34.813937 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Sep 16 05:02:34.813944 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Sep 16 05:02:34.813951 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Sep 16 05:02:34.813959 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Sep 16 05:02:34.813966 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Sep 16 05:02:34.813973 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Sep 16 05:02:34.813980 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Sep 16 05:02:34.813990 kernel: No NUMA configuration found Sep 16 05:02:34.813997 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Sep 16 05:02:34.814004 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Sep 16 05:02:34.814012 kernel: Zone ranges: Sep 16 05:02:34.814019 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 05:02:34.814027 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Sep 16 05:02:34.814034 kernel: Normal empty Sep 16 05:02:34.814041 kernel: Device empty Sep 16 05:02:34.814049 kernel: Movable zone start for each node Sep 16 05:02:34.814058 kernel: Early memory node ranges Sep 16 05:02:34.814077 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Sep 16 05:02:34.814085 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Sep 16 05:02:34.814092 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Sep 16 05:02:34.814099 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Sep 16 05:02:34.814107 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Sep 16 05:02:34.814114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Sep 16 05:02:34.814121 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 05:02:34.814129 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Sep 16 05:02:34.814136 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 16 05:02:34.814146 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 16 05:02:34.814153 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 16 05:02:34.814160 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Sep 16 05:02:34.814168 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 16 05:02:34.814175 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 16 05:02:34.814182 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 16 05:02:34.814190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 16 05:02:34.814197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 05:02:34.814205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 05:02:34.814214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 05:02:34.814221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 05:02:34.814229 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 05:02:34.814236 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 16 05:02:34.814243 kernel: TSC deadline timer available Sep 16 05:02:34.814251 kernel: CPU topo: Max. 
logical packages: 1 Sep 16 05:02:34.814258 kernel: CPU topo: Max. logical dies: 1 Sep 16 05:02:34.814266 kernel: CPU topo: Max. dies per package: 1 Sep 16 05:02:34.814281 kernel: CPU topo: Max. threads per core: 1 Sep 16 05:02:34.814289 kernel: CPU topo: Num. cores per package: 4 Sep 16 05:02:34.814297 kernel: CPU topo: Num. threads per package: 4 Sep 16 05:02:34.814304 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 16 05:02:34.814313 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 16 05:02:34.814321 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 16 05:02:34.814328 kernel: kvm-guest: setup PV sched yield Sep 16 05:02:34.814336 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 16 05:02:34.814344 kernel: Booting paravirtualized kernel on KVM Sep 16 05:02:34.814354 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 05:02:34.814361 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 16 05:02:34.814369 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 16 05:02:34.814377 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 16 05:02:34.814384 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 16 05:02:34.814392 kernel: kvm-guest: PV spinlocks enabled Sep 16 05:02:34.814400 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 16 05:02:34.814409 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:02:34.814419 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 05:02:34.814426 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 05:02:34.814434 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 05:02:34.814442 kernel: Fallback order for Node 0: 0 Sep 16 05:02:34.814449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Sep 16 05:02:34.814457 kernel: Policy zone: DMA32 Sep 16 05:02:34.814465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 05:02:34.814481 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 16 05:02:34.814488 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 05:02:34.814498 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 05:02:34.814505 kernel: Dynamic Preempt: voluntary Sep 16 05:02:34.814513 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 05:02:34.814521 kernel: rcu: RCU event tracing is enabled. Sep 16 05:02:34.814529 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 16 05:02:34.814537 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 05:02:34.814545 kernel: Rude variant of Tasks RCU enabled. Sep 16 05:02:34.814552 kernel: Tracing variant of Tasks RCU enabled. Sep 16 05:02:34.814560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 05:02:34.814568 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 16 05:02:34.814578 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 16 05:02:34.814586 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 05:02:34.814593 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 05:02:34.814601 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 16 05:02:34.814609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 05:02:34.814616 kernel: Console: colour dummy device 80x25 Sep 16 05:02:34.814624 kernel: printk: legacy console [ttyS0] enabled Sep 16 05:02:34.814632 kernel: ACPI: Core revision 20240827 Sep 16 05:02:34.814642 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 16 05:02:34.814649 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 05:02:34.814657 kernel: x2apic enabled Sep 16 05:02:34.814665 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 05:02:34.814672 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 16 05:02:34.814680 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 16 05:02:34.814688 kernel: kvm-guest: setup PV IPIs Sep 16 05:02:34.814695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 16 05:02:34.814703 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 16 05:02:34.814713 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 16 05:02:34.814720 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 16 05:02:34.814728 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 16 05:02:34.814736 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 16 05:02:34.814743 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 05:02:34.814751 kernel: Spectre V2 : Mitigation: Retpolines Sep 16 05:02:34.814759 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 05:02:34.814766 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 16 05:02:34.814774 kernel: active return thunk: retbleed_return_thunk Sep 16 05:02:34.814783 kernel: RETBleed: Mitigation: untrained return thunk Sep 16 05:02:34.814791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 16 05:02:34.814799 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 16 05:02:34.814807 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 16 05:02:34.814815 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 16 05:02:34.814823 kernel: active return thunk: srso_return_thunk Sep 16 05:02:34.814830 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 16 05:02:34.814838 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 05:02:34.814846 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 05:02:34.814855 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 05:02:34.814863 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 05:02:34.814870 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 16 05:02:34.814878 kernel: Freeing SMP alternatives memory: 32K Sep 16 05:02:34.814886 kernel: pid_max: default: 32768 minimum: 301 Sep 16 05:02:34.814893 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 05:02:34.814901 kernel: landlock: Up and running. Sep 16 05:02:34.814908 kernel: SELinux: Initializing. Sep 16 05:02:34.814916 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 05:02:34.814926 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 05:02:34.814933 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 16 05:02:34.814941 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 16 05:02:34.814949 kernel: ... version: 0 Sep 16 05:02:34.814956 kernel: ... bit width: 48 Sep 16 05:02:34.814964 kernel: ... generic registers: 6 Sep 16 05:02:34.814971 kernel: ... value mask: 0000ffffffffffff Sep 16 05:02:34.814979 kernel: ... max period: 00007fffffffffff Sep 16 05:02:34.814986 kernel: ... fixed-purpose events: 0 Sep 16 05:02:34.814996 kernel: ... event mask: 000000000000003f Sep 16 05:02:34.815004 kernel: signal: max sigframe size: 1776 Sep 16 05:02:34.815011 kernel: rcu: Hierarchical SRCU implementation. Sep 16 05:02:34.815019 kernel: rcu: Max phase no-delay instances is 400. Sep 16 05:02:34.815027 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 05:02:34.815034 kernel: smp: Bringing up secondary CPUs ... Sep 16 05:02:34.815042 kernel: smpboot: x86: Booting SMP configuration: Sep 16 05:02:34.815049 kernel: .... node #0, CPUs: #1 #2 #3 Sep 16 05:02:34.815057 kernel: smp: Brought up 1 node, 4 CPUs Sep 16 05:02:34.815080 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 16 05:02:34.815088 kernel: Memory: 2409228K/2552216K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 137064K reserved, 0K cma-reserved) Sep 16 05:02:34.815096 kernel: devtmpfs: initialized Sep 16 05:02:34.815103 kernel: x86/mm: Memory block size: 128MB Sep 16 05:02:34.815111 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Sep 16 05:02:34.815119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Sep 16 05:02:34.815126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 05:02:34.815134 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 16 05:02:34.815142 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 05:02:34.815151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 05:02:34.815159 kernel: audit: initializing netlink subsys (disabled) Sep 16 05:02:34.815167 kernel: audit: type=2000 audit(1757998952.889:1): state=initialized audit_enabled=0 res=1 Sep 16 05:02:34.815174 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 05:02:34.815182 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 05:02:34.815190 kernel: cpuidle: using governor menu Sep 16 05:02:34.815197 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 05:02:34.815205 kernel: dca service started, version 1.12.1 Sep 16 05:02:34.815215 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 16 05:02:34.815223 kernel: PCI: Using configuration type 1 for base access Sep 16 05:02:34.815230 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Sep 16 05:02:34.815238 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 05:02:34.815246 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 05:02:34.815253 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 05:02:34.815261 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 05:02:34.815269 kernel: ACPI: Added _OSI(Module Device) Sep 16 05:02:34.815276 kernel: ACPI: Added _OSI(Processor Device) Sep 16 05:02:34.815286 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 05:02:34.815293 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 05:02:34.815301 kernel: ACPI: Interpreter enabled Sep 16 05:02:34.815308 kernel: ACPI: PM: (supports S0 S5) Sep 16 05:02:34.815316 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 05:02:34.815324 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 05:02:34.815331 kernel: PCI: Using E820 reservations for host bridge windows Sep 16 05:02:34.815339 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 16 05:02:34.815347 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 05:02:34.815523 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 16 05:02:34.815644 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 16 05:02:34.815760 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 16 05:02:34.815770 kernel: PCI host bridge to bus 0000:00 Sep 16 05:02:34.815896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 05:02:34.816018 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 05:02:34.816151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 05:02:34.816257 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 16 05:02:34.816361 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 16 05:02:34.816501 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 16 05:02:34.816642 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 05:02:34.816784 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 16 05:02:34.816908 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 16 05:02:34.817028 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 16 05:02:34.817161 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 16 05:02:34.817276 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 16 05:02:34.817390 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 16 05:02:34.817542 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 16 05:02:34.817678 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 16 05:02:34.817826 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 16 05:02:34.817966 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 16 05:02:34.818116 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 16 05:02:34.818235 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 16 05:02:34.818350 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 16 05:02:34.818464 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 16 05:02:34.818598 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 16 05:02:34.818719 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 16 05:02:34.818834 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 16 05:02:34.818949 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 16 05:02:34.819146 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 16 05:02:34.819273 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 16 05:02:34.819388 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 16 05:02:34.819522 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 16 05:02:34.819641 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 16 05:02:34.819756 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 16 05:02:34.819899 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 16 05:02:34.820144 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 16 05:02:34.820177 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 05:02:34.820186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 05:02:34.820195 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 16 05:02:34.820203 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 16 05:02:34.820217 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 16 05:02:34.820225 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 16 05:02:34.820233 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 16 05:02:34.820241 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 16 05:02:34.820249 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 16 05:02:34.820257 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 16 05:02:34.820265 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 16 05:02:34.820273 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 16 05:02:34.820282 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 16 05:02:34.820292 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 16 05:02:34.820300 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 16 05:02:34.820308 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 16 05:02:34.820316 kernel: iommu: Default domain type: Translated Sep 16 05:02:34.820324 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 05:02:34.820332 kernel: efivars: Registered efivars operations Sep 16 05:02:34.820340 kernel: PCI: Using ACPI for IRQ routing Sep 16 05:02:34.820348 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 05:02:34.820356 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Sep 16 05:02:34.820367 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Sep 16 05:02:34.820375 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Sep 16 05:02:34.820382 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Sep 16 05:02:34.820390 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Sep 16 05:02:34.820532 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 16 05:02:34.820657 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 16 05:02:34.820780 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Sep 16 05:02:34.820791 kernel: vgaarb: loaded Sep 16 05:02:34.820803 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 16 05:02:34.820812 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 16 05:02:34.820820 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 05:02:34.820828 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 05:02:34.820836 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 05:02:34.820844 kernel: pnp: PnP ACPI init Sep 16 05:02:34.820979 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 16 05:02:34.820992 kernel: pnp: PnP ACPI: found 6 devices Sep 16 05:02:34.821001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 05:02:34.821012 kernel: NET: Registered PF_INET protocol family Sep 16 05:02:34.821020 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 05:02:34.821028 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 05:02:34.821036 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 05:02:34.821044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 05:02:34.821053 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 05:02:34.821061 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 05:02:34.821743 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 05:02:34.821758 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 05:02:34.821766 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 05:02:34.821774 kernel: NET: Registered PF_XDP protocol family Sep 16 05:02:34.821915 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 16 05:02:34.822036 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 16 05:02:34.822165 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 05:02:34.822272 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 05:02:34.822377 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 05:02:34.822495 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 16 05:02:34.822605 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 16 05:02:34.822710 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 16 05:02:34.822721 kernel: PCI: CLS 0 bytes, default 64 Sep 16 05:02:34.822729 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 16 05:02:34.822737 kernel: Initialise system trusted keyrings Sep 16 05:02:34.822745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 05:02:34.822754 kernel: Key type asymmetric registered Sep 16 05:02:34.822762 kernel: Asymmetric key parser 'x509' registered Sep 16 05:02:34.822784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 05:02:34.822794 kernel: io scheduler mq-deadline registered Sep 16 05:02:34.822802 kernel: io scheduler kyber registered Sep 16 05:02:34.822809 kernel: io scheduler bfq registered Sep 16 05:02:34.822817 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 05:02:34.822826 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 16 05:02:34.822834 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 16 05:02:34.822843 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 16 05:02:34.822851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 05:02:34.822860 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 05:02:34.822869 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 05:02:34.822879 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 05:02:34.822887 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 05:02:34.823011 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 16 05:02:34.823023 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 05:02:34.823177 kernel: rtc_cmos 00:04: registered as rtc0 Sep 16 05:02:34.823293 kernel: rtc_cmos 00:04: setting system clock to 2025-09-16T05:02:34 UTC (1757998954) Sep 16 05:02:34.823407 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 16 05:02:34.823417 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 16 05:02:34.823425 kernel: efifb: probing for efifb Sep 16 05:02:34.823433 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 16 05:02:34.823442 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 16 05:02:34.823450 kernel: efifb: scrolling: redraw Sep 16 05:02:34.823458 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 16 05:02:34.823466 kernel: Console: switching to colour frame buffer device 160x50 Sep 16 05:02:34.823483 kernel: fb0: EFI VGA frame buffer device Sep 16 05:02:34.823493 kernel: pstore: Using crash dump compression: deflate Sep 16 05:02:34.823501 kernel: pstore: Registered efi_pstore as persistent store backend Sep 16 05:02:34.823509 kernel: NET: Registered PF_INET6 protocol family Sep 16 05:02:34.823519 kernel: Segment Routing with IPv6 Sep 16 05:02:34.823527 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 05:02:34.823537 kernel: NET: Registered PF_PACKET protocol family Sep 16 05:02:34.823545 kernel: Key type dns_resolver registered Sep 16 05:02:34.823553 kernel: IPI shorthand broadcast: enabled Sep 16 05:02:34.823561 kernel: sched_clock: Marking stable (2668002881, 134431719)->(2817364805, -14930205) Sep 16 05:02:34.823569 kernel: registered taskstats version 1 Sep 16 05:02:34.823577 kernel: Loading compiled-in X.509 certificates Sep 16 05:02:34.823585 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 05:02:34.823593 kernel: Demotion targets for Node 0: null Sep 16 05:02:34.823601 kernel: Key type .fscrypt registered Sep 16 05:02:34.823624 kernel: Key type fscrypt-provisioning registered Sep 16 05:02:34.823632 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 05:02:34.823640 kernel: ima: Allocated hash algorithm: sha1 Sep 16 05:02:34.823647 kernel: ima: No architecture policies found Sep 16 05:02:34.823655 kernel: clk: Disabling unused clocks Sep 16 05:02:34.823663 kernel: Warning: unable to open an initial console. 
Sep 16 05:02:34.823672 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 05:02:34.823680 kernel: Write protecting the kernel read-only data: 24576k Sep 16 05:02:34.823688 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 05:02:34.823698 kernel: Run /init as init process Sep 16 05:02:34.823706 kernel: with arguments: Sep 16 05:02:34.823714 kernel: /init Sep 16 05:02:34.823722 kernel: with environment: Sep 16 05:02:34.823730 kernel: HOME=/ Sep 16 05:02:34.823737 kernel: TERM=linux Sep 16 05:02:34.823745 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 05:02:34.823754 systemd[1]: Successfully made /usr/ read-only. Sep 16 05:02:34.823769 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 05:02:34.823778 systemd[1]: Detected virtualization kvm. Sep 16 05:02:34.823786 systemd[1]: Detected architecture x86-64. Sep 16 05:02:34.823794 systemd[1]: Running in initrd. Sep 16 05:02:34.823803 systemd[1]: No hostname configured, using default hostname. Sep 16 05:02:34.823812 systemd[1]: Hostname set to . Sep 16 05:02:34.823820 systemd[1]: Initializing machine ID from VM UUID. Sep 16 05:02:34.823828 systemd[1]: Queued start job for default target initrd.target. Sep 16 05:02:34.823839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:02:34.823848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:02:34.823857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 05:02:34.823866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 05:02:34.823874 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 05:02:34.823883 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 05:02:34.823895 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 05:02:34.823904 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 05:02:34.823913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:02:34.823921 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:02:34.823930 systemd[1]: Reached target paths.target - Path Units. Sep 16 05:02:34.823938 systemd[1]: Reached target slices.target - Slice Units. Sep 16 05:02:34.823946 systemd[1]: Reached target swap.target - Swaps. Sep 16 05:02:34.823955 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:02:34.823963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 05:02:34.823974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 05:02:34.823983 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 05:02:34.823991 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 05:02:34.824000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 16 05:02:34.824008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 05:02:34.824017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:02:34.824025 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:02:34.824034 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 05:02:34.824045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 05:02:34.824053 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 05:02:34.824083 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 05:02:34.824092 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 05:02:34.824101 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:02:34.824109 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 05:02:34.824119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:02:34.824127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 05:02:34.824139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:02:34.824147 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 05:02:34.824179 systemd-journald[219]: Collecting audit messages is disabled. Sep 16 05:02:34.824202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:02:34.824212 systemd-journald[219]: Journal started Sep 16 05:02:34.824231 systemd-journald[219]: Runtime Journal (/run/log/journal/23789e8f8e4c4e3e94e10d85eac22249) is 6M, max 48.2M, 42.2M free. Sep 16 05:02:34.824271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:02:34.817893 systemd-modules-load[221]: Inserted module 'overlay' Sep 16 05:02:34.828951 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:02:34.829800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:02:34.833618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 05:02:34.835310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 05:02:34.845131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 05:02:34.847455 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 16 05:02:34.848381 kernel: Bridge firewalling registered Sep 16 05:02:34.857664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:02:34.859355 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:02:34.861624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:02:34.868926 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 05:02:34.869614 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 05:02:34.874343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:02:34.874955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 16 05:02:34.877461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 05:02:34.878780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 05:02:34.881277 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 05:02:34.903839 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:02:34.922410 systemd-resolved[260]: Positive Trust Anchors: Sep 16 05:02:34.922423 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:02:34.922453 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:02:34.924946 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 16 05:02:34.925954 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:02:34.930893 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:02:35.014103 kernel: SCSI subsystem initialized Sep 16 05:02:35.023100 kernel: Loading iSCSI transport class v2.0-870. Sep 16 05:02:35.033093 kernel: iscsi: registered transport (tcp) Sep 16 05:02:35.054104 kernel: iscsi: registered transport (qla4xxx) Sep 16 05:02:35.054154 kernel: QLogic iSCSI HBA Driver Sep 16 05:02:35.073101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:02:35.099215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:02:35.102645 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:02:35.157495 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 05:02:35.159880 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 05:02:35.220107 kernel: raid6: avx2x4 gen() 28930 MB/s Sep 16 05:02:35.237091 kernel: raid6: avx2x2 gen() 30609 MB/s Sep 16 05:02:35.254115 kernel: raid6: avx2x1 gen() 25790 MB/s Sep 16 05:02:35.254130 kernel: raid6: using algorithm avx2x2 gen() 30609 MB/s Sep 16 05:02:35.272122 kernel: raid6: .... xor() 19669 MB/s, rmw enabled Sep 16 05:02:35.272145 kernel: raid6: using avx2x2 recovery algorithm Sep 16 05:02:35.292097 kernel: xor: automatically using best checksumming function avx Sep 16 05:02:35.453116 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 05:02:35.461711 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 05:02:35.464339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:02:35.492641 systemd-udevd[473]: Using default interface naming scheme 'v255'. 
Sep 16 05:02:35.497886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:02:35.502738 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 05:02:35.534693 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Sep 16 05:02:35.563050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 05:02:35.565546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:02:35.632143 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:02:35.636611 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 05:02:35.667193 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 16 05:02:35.671199 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 16 05:02:35.676154 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 05:02:35.676168 kernel: GPT:9289727 != 19775487 Sep 16 05:02:35.676178 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 05:02:35.676189 kernel: GPT:9289727 != 19775487 Sep 16 05:02:35.676198 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 05:02:35.676208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 05:02:35.680122 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 05:02:35.692184 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 05:02:35.696082 kernel: libata version 3.00 loaded. Sep 16 05:02:35.703791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:02:35.704926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:02:35.706090 kernel: ahci 0000:00:1f.2: version 3.0 Sep 16 05:02:35.707233 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:02:35.711297 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 16 05:02:35.711313 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 16 05:02:35.711478 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 16 05:02:35.711359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:02:35.714861 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 16 05:02:35.718088 kernel: AES CTR mode by8 optimization enabled Sep 16 05:02:35.727835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:02:35.730736 kernel: scsi host0: ahci Sep 16 05:02:35.730910 kernel: scsi host1: ahci Sep 16 05:02:35.727986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 16 05:02:35.733144 kernel: scsi host2: ahci Sep 16 05:02:35.744559 kernel: scsi host3: ahci Sep 16 05:02:35.744760 kernel: scsi host4: ahci Sep 16 05:02:35.744912 kernel: scsi host5: ahci Sep 16 05:02:35.745055 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 16 05:02:35.747083 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 16 05:02:35.747105 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 16 05:02:35.749086 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 16 05:02:35.749103 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 16 05:02:35.750626 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 16 05:02:35.752408 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 16 05:02:35.768763 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 16 05:02:35.788338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 05:02:35.795263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 16 05:02:35.795700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 16 05:02:35.796795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 05:02:35.817383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:02:35.819908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:02:35.826248 disk-uuid[633]: Primary Header is updated. Sep 16 05:02:35.826248 disk-uuid[633]: Secondary Entries is updated. Sep 16 05:02:35.826248 disk-uuid[633]: Secondary Header is updated. Sep 16 05:02:35.829567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 05:02:35.834100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 05:02:35.846594 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:02:36.062090 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 16 05:02:36.062134 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 16 05:02:36.063096 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 16 05:02:36.063147 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 16 05:02:36.064101 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 16 05:02:36.065102 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 05:02:36.065160 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 16 05:02:36.065638 kernel: ata3.00: applying bridge limits Sep 16 05:02:36.067106 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 16 05:02:36.067122 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 05:02:36.068086 kernel: ata3.00: configured for UDMA/100 Sep 16 05:02:36.070106 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 05:02:36.118560 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 16 05:02:36.118774 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 05:02:36.139096 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 16 05:02:36.564743 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 16 05:02:36.566398 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 05:02:36.567910 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:02:36.569225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 05:02:36.572004 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 05:02:36.595305 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 05:02:36.835109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 05:02:36.835734 disk-uuid[634]: The operation has completed successfully. Sep 16 05:02:36.865142 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 05:02:36.865266 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 05:02:36.895321 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 05:02:36.925156 sh[668]: Success Sep 16 05:02:36.942099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 05:02:36.942129 kernel: device-mapper: uevent: version 1.0.3 Sep 16 05:02:36.943630 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 05:02:36.952111 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 16 05:02:36.978604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 05:02:36.980621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 05:02:36.992385 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 05:02:36.997539 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (680) Sep 16 05:02:36.997563 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 05:02:36.997574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:02:37.002683 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 05:02:37.002718 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 05:02:37.003836 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 05:02:37.004524 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 05:02:37.005859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 05:02:37.006554 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 05:02:37.008892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 05:02:37.036691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Sep 16 05:02:37.036722 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:02:37.036733 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:02:37.040310 kernel: BTRFS info (device vda6): turning on async discard Sep 16 05:02:37.040344 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 05:02:37.046090 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:02:37.046231 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 16 05:02:37.048564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 05:02:37.137203 ignition[754]: Ignition 2.22.0 Sep 16 05:02:37.137217 ignition[754]: Stage: fetch-offline Sep 16 05:02:37.137245 ignition[754]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:37.137254 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:37.137345 ignition[754]: parsed url from cmdline: "" Sep 16 05:02:37.137349 ignition[754]: no config URL provided Sep 16 05:02:37.137354 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 05:02:37.137362 ignition[754]: no config at "/usr/lib/ignition/user.ign" Sep 16 05:02:37.137384 ignition[754]: op(1): [started] loading QEMU firmware config module Sep 16 05:02:37.137389 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 05:02:37.145058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 05:02:37.149752 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:02:37.153839 ignition[754]: op(1): [finished] loading QEMU firmware config module Sep 16 05:02:37.190522 ignition[754]: parsing config with SHA512: 674d4a62d6fd71ce4283bb377d4ae9ee7c536f3789273ca88880e18ec1c56c1a24a63fc787feb71875687952c7239d4363c90ff4b2189b4f5c22852cc19c62fb Sep 16 05:02:37.195308 systemd-networkd[857]: lo: Link UP Sep 16 05:02:37.195316 systemd-networkd[857]: lo: Gained carrier Sep 16 05:02:37.196791 systemd-networkd[857]: Enumeration completed Sep 16 05:02:37.196933 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:02:37.198245 ignition[754]: fetch-offline: fetch-offline passed Sep 16 05:02:37.197417 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:02:37.198299 ignition[754]: Ignition finished successfully Sep 16 05:02:37.197422 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 05:02:37.197869 unknown[754]: fetched base config from "system" Sep 16 05:02:37.197877 unknown[754]: fetched user config from "qemu" Sep 16 05:02:37.199180 systemd-networkd[857]: eth0: Link UP Sep 16 05:02:37.199203 systemd[1]: Reached target network.target - Network. Sep 16 05:02:37.199316 systemd-networkd[857]: eth0: Gained carrier Sep 16 05:02:37.199325 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:02:37.201308 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 05:02:37.203183 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 05:02:37.203908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 16 05:02:37.209105 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 05:02:37.240346 ignition[861]: Ignition 2.22.0 Sep 16 05:02:37.240358 ignition[861]: Stage: kargs Sep 16 05:02:37.240485 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:37.240496 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:37.241183 ignition[861]: kargs: kargs passed Sep 16 05:02:37.241222 ignition[861]: Ignition finished successfully Sep 16 05:02:37.245912 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 05:02:37.247870 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 05:02:37.287149 ignition[870]: Ignition 2.22.0 Sep 16 05:02:37.287161 ignition[870]: Stage: disks Sep 16 05:02:37.287285 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:37.287295 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:37.287958 ignition[870]: disks: disks passed Sep 16 05:02:37.288002 ignition[870]: Ignition finished successfully Sep 16 05:02:37.292270 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 05:02:37.293199 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 05:02:37.293505 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 05:02:37.293818 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:02:37.294298 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:02:37.294614 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:02:37.295765 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 05:02:37.322506 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 05:02:37.330019 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 05:02:37.333442 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 05:02:37.438102 kernel: EXT4-fs (vda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 05:02:37.438916 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 05:02:37.440220 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 05:02:37.442373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:02:37.444120 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 05:02:37.445270 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 05:02:37.445308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 05:02:37.445330 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 05:02:37.454993 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 05:02:37.456718 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
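In the systemd-fsck summary above, "ROOT: clean, 15/553520 files, 52789/553472 blocks" reports used/total inodes and used/total blocks on the ROOT filesystem before it is mounted at /sysroot. A small sketch (assuming exactly this message format) that turns the summary into percentages:

    import re

    summary = "ROOT: clean, 15/553520 files, 52789/553472 blocks"
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", summary)
    inodes_used, inodes_total, blocks_used, blocks_total = map(int, m.groups())
    print(f"inodes used: {100 * inodes_used / inodes_total:.2f}%")   # ~0.00%
    print(f"blocks used: {100 * blocks_used / blocks_total:.2f}%")   # ~9.54%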
Sep 16 05:02:37.461735 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Sep 16 05:02:37.461757 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:02:37.461768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:02:37.464650 kernel: BTRFS info (device vda6): turning on async discard Sep 16 05:02:37.464702 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 05:02:37.466821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 05:02:37.492340 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 05:02:37.497366 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Sep 16 05:02:37.502112 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 05:02:37.506531 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 05:02:37.591782 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 05:02:37.592986 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 05:02:37.594704 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 05:02:37.617093 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:02:37.632260 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 05:02:37.649009 ignition[1002]: INFO : Ignition 2.22.0 Sep 16 05:02:37.649009 ignition[1002]: INFO : Stage: mount Sep 16 05:02:37.650589 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:37.650589 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:37.650589 ignition[1002]: INFO : mount: mount passed Sep 16 05:02:37.650589 ignition[1002]: INFO : Ignition finished successfully Sep 16 05:02:37.652364 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 05:02:37.654618 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 05:02:37.996357 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 05:02:37.997743 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:02:38.023957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Sep 16 05:02:38.023984 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:02:38.024001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:02:38.027602 kernel: BTRFS info (device vda6): turning on async discard Sep 16 05:02:38.027621 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 05:02:38.029101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 05:02:38.060213 ignition[1031]: INFO : Ignition 2.22.0 Sep 16 05:02:38.060213 ignition[1031]: INFO : Stage: files Sep 16 05:02:38.061845 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:38.061845 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:38.064476 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Sep 16 05:02:38.065862 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 05:02:38.065862 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 05:02:38.070186 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 05:02:38.071536 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 05:02:38.071536 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 05:02:38.070826 unknown[1031]: wrote ssh authorized keys file for user: core Sep 16 05:02:38.075229 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 16 05:02:38.075229 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 16 05:02:38.113162 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 05:02:38.297513 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 16 05:02:38.299488 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 05:02:38.299488 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 16 05:02:38.376206 systemd-networkd[857]: eth0: Gained IPv6LL Sep 16 05:02:38.404657 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 05:02:38.539084 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 05:02:38.540861 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:02:38.554453 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 16 05:02:38.810241 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 05:02:39.276021 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:02:39.276021 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 05:02:39.279743 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 05:02:39.285710 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 05:02:39.285710 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 05:02:39.285710 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 16 05:02:39.289868 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 05:02:39.291680 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 05:02:39.291680 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 16 05:02:39.291680 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 16 05:02:39.311596 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 05:02:39.316348 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 05:02:39.317959 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 16 05:02:39.317959 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 16 05:02:39.317959 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 05:02:39.317959 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
Sep 16 05:02:39.317959 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 05:02:39.317959 ignition[1031]: INFO : files: files passed Sep 16 05:02:39.317959 ignition[1031]: INFO : Ignition finished successfully Sep 16 05:02:39.322491 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 05:02:39.325789 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 05:02:39.338833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 05:02:39.344478 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 05:02:39.350190 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 05:02:39.355557 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Sep 16 05:02:39.359559 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:02:39.359559 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:02:39.362677 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:02:39.365626 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 05:02:39.366272 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 05:02:39.367308 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 05:02:39.401641 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 05:02:39.401767 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 05:02:39.402448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 05:02:39.405054 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 05:02:39.407546 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 05:02:39.408426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 05:02:39.439123 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 05:02:39.440596 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 05:02:39.464769 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:02:39.465086 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:02:39.467455 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 05:02:39.469480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 05:02:39.469582 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 05:02:39.472441 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 05:02:39.474390 systemd[1]: Stopped target basic.target - Basic System. Sep 16 05:02:39.474895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 05:02:39.475367 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 05:02:39.478948 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 05:02:39.480975 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
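The files stage above fetches the helm and cilium tarballs over HTTPS, writes SSH keys for user "core", drops several YAML manifests and the kubernetes sysext image, installs prepare-helm.service and coreos-metadata.service, and sets their presets to enabled and disabled respectively. For orientation, a Python dict shaped like the Ignition (spec 3.x) JSON that could request a subset of those operations; the spec version, unit contents, and key material are illustrative placeholders, not recovered from the log:

    example_ignition_config = {
        "ignition": {"version": "3.4.0"},                      # assumed spec version
        "passwd": {
            "users": [{"name": "core",
                       "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}],
        },
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            }],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n# placeholder\n"},
                {"name": "coreos-metadata.service", "enabled": False,
                 "contents": "[Unit]\n# placeholder\n"},
            ],
        },
    }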
Sep 16 05:02:39.483077 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 05:02:39.485538 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 05:02:39.486061 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 05:02:39.486378 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 05:02:39.486682 systemd[1]: Stopped target swap.target - Swaps. Sep 16 05:02:39.492383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 05:02:39.492483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 05:02:39.495030 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:02:39.495592 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:02:39.495865 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 05:02:39.499802 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:02:39.501762 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 05:02:39.501862 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 05:02:39.504446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 05:02:39.504545 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 05:02:39.506061 systemd[1]: Stopped target paths.target - Path Units. Sep 16 05:02:39.508335 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 05:02:39.513144 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:02:39.513657 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 05:02:39.516155 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 05:02:39.517734 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 05:02:39.517833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 05:02:39.519384 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 05:02:39.519486 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 05:02:39.521003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 05:02:39.521145 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 05:02:39.522723 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 05:02:39.522841 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 05:02:39.526044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 05:02:39.527195 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 05:02:39.529336 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 05:02:39.529464 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:02:39.534113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 05:02:39.535115 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 05:02:39.542254 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 05:02:39.544257 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 16 05:02:39.560085 ignition[1086]: INFO : Ignition 2.22.0 Sep 16 05:02:39.560085 ignition[1086]: INFO : Stage: umount Sep 16 05:02:39.561883 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:02:39.561883 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 05:02:39.561883 ignition[1086]: INFO : umount: umount passed Sep 16 05:02:39.561883 ignition[1086]: INFO : Ignition finished successfully Sep 16 05:02:39.566736 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 05:02:39.567351 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 05:02:39.567464 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 05:02:39.569399 systemd[1]: Stopped target network.target - Network. Sep 16 05:02:39.569922 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 05:02:39.569986 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 05:02:39.570732 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 05:02:39.570777 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 05:02:39.571025 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 05:02:39.571085 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 05:02:39.571351 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 05:02:39.571392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 05:02:39.572559 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 05:02:39.579490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 05:02:39.588114 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 05:02:39.588242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 05:02:39.591961 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 05:02:39.592238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 05:02:39.592283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:02:39.595849 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:02:39.599412 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 05:02:39.599563 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 05:02:39.603302 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 05:02:39.603502 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 05:02:39.605634 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 05:02:39.605673 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:02:39.609752 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 05:02:39.611653 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 05:02:39.611725 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 05:02:39.612433 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 05:02:39.612484 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:02:39.616561 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 05:02:39.616618 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 16 05:02:39.617092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:02:39.618865 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 05:02:39.634912 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 05:02:39.636215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:02:39.636951 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 05:02:39.636995 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 05:02:39.638800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 05:02:39.638834 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:02:39.640878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 05:02:39.640929 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 05:02:39.641858 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 05:02:39.641901 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 05:02:39.642647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 05:02:39.642692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 05:02:39.651244 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 05:02:39.651791 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 05:02:39.651839 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:02:39.656477 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 05:02:39.656548 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 05:02:39.659839 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 05:02:39.659899 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:02:39.663093 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 05:02:39.663143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:02:39.663577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:02:39.663621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:02:39.668959 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 05:02:39.683197 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 05:02:39.690247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 05:02:39.690367 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 05:02:39.734556 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 05:02:39.734682 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 05:02:39.736938 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 05:02:39.737469 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 05:02:39.737536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 05:02:39.738551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 05:02:39.755800 systemd[1]: Switching root. 
Sep 16 05:02:39.784346 systemd-journald[219]: Journal stopped Sep 16 05:02:40.999145 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Sep 16 05:02:40.999218 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 05:02:40.999233 kernel: SELinux: policy capability open_perms=1 Sep 16 05:02:40.999244 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 05:02:40.999255 kernel: SELinux: policy capability always_check_network=0 Sep 16 05:02:40.999267 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 05:02:40.999290 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 05:02:40.999301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 05:02:40.999312 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 05:02:40.999323 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 05:02:40.999335 kernel: audit: type=1403 audit(1757998960.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 05:02:40.999352 systemd[1]: Successfully loaded SELinux policy in 59.204ms. Sep 16 05:02:40.999371 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.040ms. Sep 16 05:02:40.999384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 05:02:40.999396 systemd[1]: Detected virtualization kvm. Sep 16 05:02:40.999410 systemd[1]: Detected architecture x86-64. Sep 16 05:02:40.999422 systemd[1]: Detected first boot. Sep 16 05:02:40.999435 systemd[1]: Initializing machine ID from VM UUID. Sep 16 05:02:40.999446 zram_generator::config[1131]: No configuration found. Sep 16 05:02:40.999459 kernel: Guest personality initialized and is inactive Sep 16 05:02:40.999471 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 16 05:02:40.999482 kernel: Initialized host personality Sep 16 05:02:40.999493 kernel: NET: Registered PF_VSOCK protocol family Sep 16 05:02:40.999504 systemd[1]: Populated /etc with preset unit settings. Sep 16 05:02:40.999519 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 05:02:40.999531 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 05:02:40.999542 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 05:02:40.999554 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 05:02:40.999566 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 05:02:40.999578 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 05:02:40.999590 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 05:02:40.999601 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 05:02:40.999615 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 05:02:40.999628 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 05:02:40.999639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 05:02:40.999651 systemd[1]: Created slice user.slice - User and Session Slice. 
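The "systemd 256.8 running in system mode (...)" line above lists the compile-time features of this build, each prefixed with "+" (enabled) or "-" (disabled). A trivial sketch that splits the list as logged:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")
    enabled  = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]
    print(len(enabled), "enabled,", len(disabled), "disabled")      # 25 enabled, 12 disabled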
Sep 16 05:02:40.999663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:02:40.999675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:02:40.999687 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 05:02:40.999699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 05:02:40.999711 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 05:02:40.999725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 05:02:40.999737 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 16 05:02:40.999749 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:02:40.999761 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:02:40.999786 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 05:02:40.999798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 05:02:40.999810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 05:02:40.999824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 05:02:40.999836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:02:40.999848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 05:02:40.999860 systemd[1]: Reached target slices.target - Slice Units. Sep 16 05:02:40.999871 systemd[1]: Reached target swap.target - Swaps. Sep 16 05:02:40.999883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 05:02:40.999894 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 05:02:40.999910 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 05:02:40.999927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:02:40.999938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 05:02:40.999952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:02:40.999964 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 05:02:40.999976 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 05:02:40.999988 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 05:02:40.999999 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 05:02:41.000011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.000023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 05:02:41.000034 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 05:02:41.000053 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 05:02:41.000089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 05:02:41.000112 systemd[1]: Reached target machines.target - Containers. 
Sep 16 05:02:41.000124 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 05:02:41.000136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:02:41.000147 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 05:02:41.000159 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 05:02:41.000171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:02:41.000183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 05:02:41.000198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:02:41.000210 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 05:02:41.000221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:02:41.000234 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 05:02:41.000245 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 05:02:41.000259 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 05:02:41.000271 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 05:02:41.000291 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 05:02:41.000306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:02:41.000320 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:02:41.000332 kernel: fuse: init (API version 7.41) Sep 16 05:02:41.000344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 05:02:41.000356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:02:41.000369 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 05:02:41.000380 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 05:02:41.000392 kernel: loop: module loaded Sep 16 05:02:41.000405 kernel: ACPI: bus type drm_connector registered Sep 16 05:02:41.000418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:02:41.000432 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 05:02:41.000446 systemd[1]: Stopped verity-setup.service. Sep 16 05:02:41.000458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.000470 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 05:02:41.000485 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 05:02:41.000497 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 05:02:41.000508 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 05:02:41.000520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 05:02:41.000532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 05:02:41.000566 systemd-journald[1209]: Collecting audit messages is disabled. 
Sep 16 05:02:41.000589 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 05:02:41.000601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:02:41.000614 systemd-journald[1209]: Journal started Sep 16 05:02:41.000636 systemd-journald[1209]: Runtime Journal (/run/log/journal/23789e8f8e4c4e3e94e10d85eac22249) is 6M, max 48.2M, 42.2M free. Sep 16 05:02:40.752525 systemd[1]: Queued start job for default target multi-user.target. Sep 16 05:02:40.778901 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 16 05:02:40.779363 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 05:02:41.003271 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:02:41.004873 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 05:02:41.005212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 05:02:41.006652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:02:41.006862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:02:41.008250 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:02:41.008468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 05:02:41.009772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:02:41.009981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:02:41.011450 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 05:02:41.011659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 05:02:41.013032 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:02:41.013288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:02:41.014662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:02:41.016044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:02:41.017590 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 05:02:41.019317 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 05:02:41.034649 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:02:41.037339 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 05:02:41.039468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 05:02:41.040558 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 05:02:41.040644 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:02:41.042005 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 05:02:41.060175 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 05:02:41.061435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:02:41.064131 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 05:02:41.067313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 16 05:02:41.068667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:02:41.070299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 05:02:41.071512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:02:41.080194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:02:41.082564 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 05:02:41.084596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:02:41.087866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:02:41.089121 systemd-journald[1209]: Time spent on flushing to /var/log/journal/23789e8f8e4c4e3e94e10d85eac22249 is 20.233ms for 1047 entries. Sep 16 05:02:41.089121 systemd-journald[1209]: System Journal (/var/log/journal/23789e8f8e4c4e3e94e10d85eac22249) is 8M, max 195.6M, 187.6M free. Sep 16 05:02:41.119302 systemd-journald[1209]: Received client request to flush runtime journal. Sep 16 05:02:41.119334 kernel: loop0: detected capacity change from 0 to 128016 Sep 16 05:02:41.090906 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 05:02:41.092882 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 05:02:41.094650 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 05:02:41.104860 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 05:02:41.108288 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 05:02:41.123626 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 05:02:41.124600 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 05:02:41.125700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:02:41.128770 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 16 05:02:41.128789 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 16 05:02:41.133685 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:02:41.137371 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 05:02:41.150264 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 05:02:41.153100 kernel: loop1: detected capacity change from 0 to 229808 Sep 16 05:02:41.174115 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 05:02:41.176517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 05:02:41.179122 kernel: loop2: detected capacity change from 0 to 110984 Sep 16 05:02:41.202773 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 16 05:02:41.203134 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 16 05:02:41.207497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 16 05:02:41.211085 kernel: loop3: detected capacity change from 0 to 128016 Sep 16 05:02:41.220215 kernel: loop4: detected capacity change from 0 to 229808 Sep 16 05:02:41.227089 kernel: loop5: detected capacity change from 0 to 110984 Sep 16 05:02:41.238390 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 16 05:02:41.238918 (sd-merge)[1275]: Merged extensions into '/usr'. Sep 16 05:02:41.243909 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 05:02:41.243930 systemd[1]: Reloading... Sep 16 05:02:41.290153 zram_generator::config[1299]: No configuration found. Sep 16 05:02:41.401495 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 05:02:41.492126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 05:02:41.492458 systemd[1]: Reloading finished in 248 ms. Sep 16 05:02:41.520810 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 05:02:41.522468 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 05:02:41.539386 systemd[1]: Starting ensure-sysext.service... Sep 16 05:02:41.541113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:02:41.551076 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Sep 16 05:02:41.551093 systemd[1]: Reloading... Sep 16 05:02:41.561176 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 05:02:41.561209 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 05:02:41.561824 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 05:02:41.562165 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 05:02:41.563110 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 05:02:41.563388 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 16 05:02:41.563528 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 16 05:02:41.567551 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 05:02:41.567564 systemd-tmpfiles[1340]: Skipping /boot Sep 16 05:02:41.578037 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 05:02:41.578050 systemd-tmpfiles[1340]: Skipping /boot Sep 16 05:02:41.598101 zram_generator::config[1367]: No configuration found. Sep 16 05:02:41.776487 systemd[1]: Reloading finished in 225 ms. Sep 16 05:02:41.800590 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 05:02:41.813762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:02:41.822274 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:02:41.824732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 05:02:41.826987 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 05:02:41.838030 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
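The (sd-merge) messages above show systemd-sysext locating the containerd-flatcar, docker-flatcar and kubernetes extension images (the loop0 through loop5 capacity changes correspond to those images being set up) and merging them into /usr, after which a reload is requested. A sketch that lists the extension images present on disk; the search directories are an assumption based on systemd-sysext conventions, since the log itself only shows /etc/extensions/kubernetes.raw being created earlier:

    from pathlib import Path

    search_dirs = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
    for d in map(Path, search_dirs):
        if d.is_dir():
            for img in sorted(d.glob("*.raw")):
                print(img)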
Sep 16 05:02:41.842295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:02:41.845140 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 05:02:41.849684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.849851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:02:41.853663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:02:41.855906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:02:41.860332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:02:41.861500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:02:41.861591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:02:41.863280 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 05:02:41.864421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.866507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:02:41.866723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:02:41.868542 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:02:41.868742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:02:41.874768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:02:41.875421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:02:41.877165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 05:02:41.881879 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 16 05:02:41.887644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.887825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:02:41.889489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:02:41.893252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:02:41.895959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:02:41.897311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:02:41.897440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:02:41.907477 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 05:02:41.908609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 16 05:02:41.910401 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 05:02:41.912678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:02:41.914588 augenrules[1444]: No rules Sep 16 05:02:41.918326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 05:02:41.920464 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:02:41.920769 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:02:41.923588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:02:41.923800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:02:41.925338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:02:41.925542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:02:41.926993 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 05:02:41.928493 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:02:41.928695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:02:41.930585 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 05:02:41.947588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.951248 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:02:41.952347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:02:41.953423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:02:41.959217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 05:02:41.965220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:02:41.968483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:02:41.969825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:02:41.969940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:02:41.974235 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:02:41.975250 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 05:02:41.975363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:02:41.978628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:02:41.979171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:02:41.980792 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:02:41.981014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 05:02:41.982677 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:02:41.982895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 16 05:02:41.986920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:02:41.987156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:02:41.990980 systemd[1]: Finished ensure-sysext.service. Sep 16 05:02:41.995770 augenrules[1486]: /sbin/augenrules: No change Sep 16 05:02:42.004496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:02:42.004559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:02:42.005414 augenrules[1514]: No rules Sep 16 05:02:42.008203 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 16 05:02:42.009698 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:02:42.009963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:02:42.037920 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 05:02:42.068589 systemd-resolved[1409]: Positive Trust Anchors: Sep 16 05:02:42.068611 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:02:42.068649 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:02:42.073766 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 16 05:02:42.075376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:02:42.076648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:02:42.080100 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 05:02:42.081475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 05:02:42.085617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 05:02:42.093093 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 16 05:02:42.101108 kernel: ACPI: button: Power Button [PWRF] Sep 16 05:02:42.105290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 05:02:42.126542 systemd-networkd[1493]: lo: Link UP Sep 16 05:02:42.126553 systemd-networkd[1493]: lo: Gained carrier Sep 16 05:02:42.128114 systemd-networkd[1493]: Enumeration completed Sep 16 05:02:42.128202 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:02:42.129386 systemd[1]: Reached target network.target - Network. Sep 16 05:02:42.130350 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:02:42.130362 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 16 05:02:42.131183 systemd-networkd[1493]: eth0: Link UP Sep 16 05:02:42.131371 systemd-networkd[1493]: eth0: Gained carrier Sep 16 05:02:42.131393 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:02:42.133335 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 05:02:42.136624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 05:02:42.145115 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 05:02:42.155230 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 16 05:02:42.155513 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 16 05:02:42.157111 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 16 05:02:42.158580 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 05:02:42.176013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:02:42.193596 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 16 05:02:42.195029 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 05:02:43.273244 systemd-resolved[1409]: Clock change detected. Flushing caches. Sep 16 05:02:43.273367 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 16 05:02:43.273855 systemd-timesyncd[1520]: Initial clock synchronization to Tue 2025-09-16 05:02:43.273192 UTC. Sep 16 05:02:43.324833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:02:43.326250 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:02:43.327405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 05:02:43.328623 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 05:02:43.329827 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 16 05:02:43.331073 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 05:02:43.332225 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 05:02:43.334553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 05:02:43.335770 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 05:02:43.335806 systemd[1]: Reached target paths.target - Path Units. Sep 16 05:02:43.336688 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:02:43.338941 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 05:02:43.343232 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 05:02:43.351844 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 05:02:43.353470 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 05:02:43.355592 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 05:02:43.361879 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 16 05:02:43.363187 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 05:02:43.364887 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 05:02:43.366627 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:02:43.367555 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:02:43.368481 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 05:02:43.368521 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 05:02:43.369383 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 05:02:43.373655 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 05:02:43.378649 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 05:02:43.387328 kernel: kvm_amd: TSC scaling supported Sep 16 05:02:43.387389 kernel: kvm_amd: Nested Virtualization enabled Sep 16 05:02:43.387426 kernel: kvm_amd: Nested Paging enabled Sep 16 05:02:43.387438 kernel: kvm_amd: LBR virtualization supported Sep 16 05:02:43.388892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 05:02:43.393286 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 05:02:43.393518 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 16 05:02:43.393539 kernel: kvm_amd: Virtual GIF supported Sep 16 05:02:43.394389 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 05:02:43.395828 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 05:02:43.398715 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 05:02:43.400573 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 05:02:43.404348 jq[1567]: false Sep 16 05:02:43.410409 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 05:02:43.413145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 05:02:43.419614 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 05:02:43.421458 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 05:02:43.421996 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 05:02:43.422744 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 05:02:43.428601 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 05:02:43.435794 jq[1576]: true Sep 16 05:02:43.436192 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Sep 16 05:02:43.436396 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Sep 16 05:02:43.436773 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 05:02:43.438261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 05:02:43.438521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 05:02:43.439667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 16 05:02:43.439919 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 05:02:43.447823 extend-filesystems[1568]: Found /dev/vda6 Sep 16 05:02:43.450893 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Sep 16 05:02:43.450886 oslogin_cache_refresh[1569]: Failure getting users, quitting Sep 16 05:02:43.450958 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 05:02:43.450911 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 05:02:43.451006 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Sep 16 05:02:43.450973 oslogin_cache_refresh[1569]: Refreshing group entry cache Sep 16 05:02:43.455184 extend-filesystems[1568]: Found /dev/vda9 Sep 16 05:02:43.457402 update_engine[1575]: I20250916 05:02:43.456663 1575 main.cc:92] Flatcar Update Engine starting Sep 16 05:02:43.456961 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 05:02:43.457222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 05:02:43.459344 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Sep 16 05:02:43.459344 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:02:43.459179 oslogin_cache_refresh[1569]: Failure getting groups, quitting Sep 16 05:02:43.459190 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:02:43.460558 extend-filesystems[1568]: Checking size of /dev/vda9 Sep 16 05:02:43.461945 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 05:02:43.462195 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 05:02:43.471312 jq[1589]: true Sep 16 05:02:43.472666 extend-filesystems[1568]: Resized partition /dev/vda9 Sep 16 05:02:43.474579 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 05:02:43.475765 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 05:02:43.480528 tar[1587]: linux-amd64/LICENSE Sep 16 05:02:43.481525 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 16 05:02:43.481683 tar[1587]: linux-amd64/helm Sep 16 05:02:43.499536 kernel: EDAC MC: Ver: 3.0.0 Sep 16 05:02:43.509033 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Sep 16 05:02:43.509061 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 05:02:43.511625 systemd-logind[1574]: New seat seat0. Sep 16 05:02:43.513331 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 05:02:43.518526 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 16 05:02:43.531188 dbus-daemon[1565]: [system] SELinux support is enabled Sep 16 05:02:43.539951 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 05:02:43.545434 update_engine[1575]: I20250916 05:02:43.542379 1575 update_check_scheduler.cc:74] Next update check in 4m49s Sep 16 05:02:43.531585 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 16 05:02:43.535931 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 05:02:43.535954 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 05:02:43.537254 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 05:02:43.537269 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 05:02:43.541800 systemd[1]: Started update-engine.service - Update Engine. Sep 16 05:02:43.547618 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 16 05:02:43.547618 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 05:02:43.547618 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 16 05:02:43.555711 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Sep 16 05:02:43.550727 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 05:02:43.563788 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Sep 16 05:02:43.557200 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 05:02:43.557479 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 05:02:43.559001 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 05:02:43.590710 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 16 05:02:43.633548 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 05:02:43.700922 containerd[1596]: time="2025-09-16T05:02:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 05:02:43.701950 containerd[1596]: time="2025-09-16T05:02:43.701895965Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 05:02:43.713941 containerd[1596]: time="2025-09-16T05:02:43.713907470Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.512µs" Sep 16 05:02:43.714020 containerd[1596]: time="2025-09-16T05:02:43.714004202Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 05:02:43.714083 containerd[1596]: time="2025-09-16T05:02:43.714067571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 05:02:43.714306 containerd[1596]: time="2025-09-16T05:02:43.714288245Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 05:02:43.714372 containerd[1596]: time="2025-09-16T05:02:43.714357464Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 05:02:43.714440 containerd[1596]: time="2025-09-16T05:02:43.714427967Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:02:43.714578 containerd[1596]: time="2025-09-16T05:02:43.714559984Z" level=info msg="skip loading plugin" error="no 
scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:02:43.714632 containerd[1596]: time="2025-09-16T05:02:43.714618113Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:02:43.715122 containerd[1596]: time="2025-09-16T05:02:43.715098344Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:02:43.715187 containerd[1596]: time="2025-09-16T05:02:43.715175198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:02:43.715889 containerd[1596]: time="2025-09-16T05:02:43.715232575Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:02:43.715889 containerd[1596]: time="2025-09-16T05:02:43.715880310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 05:02:43.716051 containerd[1596]: time="2025-09-16T05:02:43.716027857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 05:02:43.716294 containerd[1596]: time="2025-09-16T05:02:43.716264431Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:02:43.716353 containerd[1596]: time="2025-09-16T05:02:43.716310417Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:02:43.716353 containerd[1596]: time="2025-09-16T05:02:43.716323311Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 05:02:43.716396 containerd[1596]: time="2025-09-16T05:02:43.716357535Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 05:02:43.716679 containerd[1596]: time="2025-09-16T05:02:43.716647389Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 05:02:43.716761 containerd[1596]: time="2025-09-16T05:02:43.716728551Z" level=info msg="metadata content store policy set" policy=shared Sep 16 05:02:43.721527 containerd[1596]: time="2025-09-16T05:02:43.721470784Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 05:02:43.721564 containerd[1596]: time="2025-09-16T05:02:43.721546867Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 05:02:43.721604 containerd[1596]: time="2025-09-16T05:02:43.721563679Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 05:02:43.721604 containerd[1596]: time="2025-09-16T05:02:43.721576603Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 05:02:43.721604 containerd[1596]: time="2025-09-16T05:02:43.721597071Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: 
time="2025-09-16T05:02:43.721610697Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: time="2025-09-16T05:02:43.721623511Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: time="2025-09-16T05:02:43.721641014Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: time="2025-09-16T05:02:43.721652305Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: time="2025-09-16T05:02:43.721661833Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 05:02:43.721678 containerd[1596]: time="2025-09-16T05:02:43.721680247Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 05:02:43.721806 containerd[1596]: time="2025-09-16T05:02:43.721694093Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 05:02:43.721806 containerd[1596]: time="2025-09-16T05:02:43.721800443Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 05:02:43.721843 containerd[1596]: time="2025-09-16T05:02:43.721818567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 05:02:43.721843 containerd[1596]: time="2025-09-16T05:02:43.721838774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 05:02:43.721891 containerd[1596]: time="2025-09-16T05:02:43.721858441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 05:02:43.721891 containerd[1596]: time="2025-09-16T05:02:43.721869542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 05:02:43.721891 containerd[1596]: time="2025-09-16T05:02:43.721880232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 05:02:43.721891 containerd[1596]: time="2025-09-16T05:02:43.721891083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 05:02:43.721982 containerd[1596]: time="2025-09-16T05:02:43.721901552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 05:02:43.721982 containerd[1596]: time="2025-09-16T05:02:43.721912413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 05:02:43.721982 containerd[1596]: time="2025-09-16T05:02:43.721922962Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 05:02:43.721982 containerd[1596]: time="2025-09-16T05:02:43.721933753Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 05:02:43.722067 containerd[1596]: time="2025-09-16T05:02:43.721990910Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 05:02:43.722067 containerd[1596]: time="2025-09-16T05:02:43.722003213Z" level=info msg="Start snapshots syncer" Sep 16 05:02:43.722067 containerd[1596]: 
time="2025-09-16T05:02:43.722025475Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 05:02:43.722323 containerd[1596]: time="2025-09-16T05:02:43.722277547Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 05:02:43.722536 containerd[1596]: time="2025-09-16T05:02:43.722334073Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 05:02:43.722536 containerd[1596]: time="2025-09-16T05:02:43.722395268Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 05:02:43.722586 containerd[1596]: time="2025-09-16T05:02:43.722573893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 05:02:43.722628 containerd[1596]: time="2025-09-16T05:02:43.722615701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 05:02:43.722651 containerd[1596]: time="2025-09-16T05:02:43.722629517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 05:02:43.722651 containerd[1596]: time="2025-09-16T05:02:43.722640298Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 05:02:43.722698 containerd[1596]: time="2025-09-16T05:02:43.722652641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 05:02:43.722698 containerd[1596]: time="2025-09-16T05:02:43.722674612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 05:02:43.722698 containerd[1596]: time="2025-09-16T05:02:43.722685723Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Sep 16 05:02:43.722765 containerd[1596]: time="2025-09-16T05:02:43.722705410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 05:02:43.722765 containerd[1596]: time="2025-09-16T05:02:43.722715879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 05:02:43.722765 containerd[1596]: time="2025-09-16T05:02:43.722726269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 05:02:43.722765 containerd[1596]: time="2025-09-16T05:02:43.722756766Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722768117Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722780701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722789988Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722798424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722810657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722820716Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722838940Z" level=info msg="runtime interface created" Sep 16 05:02:43.722845 containerd[1596]: time="2025-09-16T05:02:43.722844430Z" level=info msg="created NRI interface" Sep 16 05:02:43.723009 containerd[1596]: time="2025-09-16T05:02:43.722853377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 05:02:43.723009 containerd[1596]: time="2025-09-16T05:02:43.722863636Z" level=info msg="Connect containerd service" Sep 16 05:02:43.723009 containerd[1596]: time="2025-09-16T05:02:43.722884756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 05:02:43.723734 containerd[1596]: time="2025-09-16T05:02:43.723708100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:02:43.725250 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 05:02:43.751028 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 05:02:43.755743 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 05:02:43.778652 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 05:02:43.779167 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 05:02:43.783574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 16 05:02:43.803635 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 05:02:43.807658 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 05:02:43.810778 containerd[1596]: time="2025-09-16T05:02:43.810718957Z" level=info msg="Start subscribing containerd event" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.811045198Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.811556567Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.811647508Z" level=info msg="Start recovering state" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.812175398Z" level=info msg="Start event monitor" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.812211856Z" level=info msg="Start cni network conf syncer for default" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.812220452Z" level=info msg="Start streaming server" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.812237765Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 05:02:43.812992 containerd[1596]: time="2025-09-16T05:02:43.812244808Z" level=info msg="runtime interface starting up..." Sep 16 05:02:43.811872 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 05:02:43.814072 containerd[1596]: time="2025-09-16T05:02:43.812250689Z" level=info msg="starting plugins..." Sep 16 05:02:43.814072 containerd[1596]: time="2025-09-16T05:02:43.813022276Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 05:02:43.814072 containerd[1596]: time="2025-09-16T05:02:43.813366782Z" level=info msg="containerd successfully booted in 0.112960s" Sep 16 05:02:43.813159 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 05:02:43.814380 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 05:02:43.843295 tar[1587]: linux-amd64/README.md Sep 16 05:02:43.867588 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 05:02:44.572633 systemd-networkd[1493]: eth0: Gained IPv6LL Sep 16 05:02:44.575310 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 05:02:44.577022 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 05:02:44.579471 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 16 05:02:44.581732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:02:44.583743 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 05:02:44.608734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 05:02:44.610321 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 16 05:02:44.610584 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 16 05:02:44.612698 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 05:02:45.298140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:02:45.299751 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 05:02:45.300935 systemd[1]: Startup finished in 2.720s (kernel) + 5.614s (initrd) + 4.037s (userspace) = 12.372s. 
Sep 16 05:02:45.303471 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:02:45.708677 kubelet[1702]: E0916 05:02:45.708540 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:02:45.712895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:02:45.713104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:02:45.713469 systemd[1]: kubelet.service: Consumed 968ms CPU time, 266.8M memory peak. Sep 16 05:02:49.233686 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 05:02:49.234823 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:33880.service - OpenSSH per-connection server daemon (10.0.0.1:33880). Sep 16 05:02:49.304665 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 33880 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:49.306582 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:49.312707 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 05:02:49.313710 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 05:02:49.319584 systemd-logind[1574]: New session 1 of user core. Sep 16 05:02:49.333628 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 05:02:49.336395 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 05:02:49.352658 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 05:02:49.354806 systemd-logind[1574]: New session c1 of user core. Sep 16 05:02:49.498165 systemd[1720]: Queued start job for default target default.target. Sep 16 05:02:49.520710 systemd[1720]: Created slice app.slice - User Application Slice. Sep 16 05:02:49.520732 systemd[1720]: Reached target paths.target - Paths. Sep 16 05:02:49.520771 systemd[1720]: Reached target timers.target - Timers. Sep 16 05:02:49.522144 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 05:02:49.533467 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 05:02:49.533637 systemd[1720]: Reached target sockets.target - Sockets. Sep 16 05:02:49.533684 systemd[1720]: Reached target basic.target - Basic System. Sep 16 05:02:49.533723 systemd[1720]: Reached target default.target - Main User Target. Sep 16 05:02:49.533759 systemd[1720]: Startup finished in 172ms. Sep 16 05:02:49.533921 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 05:02:49.535403 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 05:02:49.601608 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:33888.service - OpenSSH per-connection server daemon (10.0.0.1:33888). Sep 16 05:02:49.662414 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 33888 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:49.663885 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:49.668299 systemd-logind[1574]: New session 2 of user core. 
Sep 16 05:02:49.681625 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 05:02:49.733191 sshd[1734]: Connection closed by 10.0.0.1 port 33888 Sep 16 05:02:49.733617 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:49.745909 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:33888.service: Deactivated successfully. Sep 16 05:02:49.747636 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 05:02:49.748326 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Sep 16 05:02:49.750837 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890). Sep 16 05:02:49.751352 systemd-logind[1574]: Removed session 2. Sep 16 05:02:49.808930 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:49.810056 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:49.814017 systemd-logind[1574]: New session 3 of user core. Sep 16 05:02:49.823637 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 05:02:49.874428 sshd[1743]: Connection closed by 10.0.0.1 port 33890 Sep 16 05:02:49.874832 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:49.887246 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:33890.service: Deactivated successfully. Sep 16 05:02:49.889149 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 05:02:49.890083 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Sep 16 05:02:49.892963 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:33898.service - OpenSSH per-connection server daemon (10.0.0.1:33898). Sep 16 05:02:49.893470 systemd-logind[1574]: Removed session 3. Sep 16 05:02:49.950121 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 33898 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:49.951306 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:49.955251 systemd-logind[1574]: New session 4 of user core. Sep 16 05:02:49.964637 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 05:02:50.016801 sshd[1753]: Connection closed by 10.0.0.1 port 33898 Sep 16 05:02:50.017131 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:50.034040 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:33898.service: Deactivated successfully. Sep 16 05:02:50.036199 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 05:02:50.037101 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Sep 16 05:02:50.040148 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:55604.service - OpenSSH per-connection server daemon (10.0.0.1:55604). Sep 16 05:02:50.041108 systemd-logind[1574]: Removed session 4. Sep 16 05:02:50.097842 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 55604 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:50.099267 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:50.103898 systemd-logind[1574]: New session 5 of user core. Sep 16 05:02:50.113634 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 16 05:02:50.171799 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 05:02:50.172108 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:50.192921 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:50.195107 sshd[1762]: Connection closed by 10.0.0.1 port 55604 Sep 16 05:02:50.195587 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:50.209289 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:55604.service: Deactivated successfully. Sep 16 05:02:50.211054 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 05:02:50.211802 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Sep 16 05:02:50.214293 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620). Sep 16 05:02:50.214996 systemd-logind[1574]: Removed session 5. Sep 16 05:02:50.277649 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:50.278804 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:50.282804 systemd-logind[1574]: New session 6 of user core. Sep 16 05:02:50.297605 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 05:02:50.350553 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 05:02:50.350846 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:50.720420 sudo[1774]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:50.726415 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 05:02:50.726741 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:50.735722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:02:50.780825 augenrules[1796]: No rules Sep 16 05:02:50.782418 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:02:50.782693 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:02:50.783827 sudo[1773]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:50.785240 sshd[1772]: Connection closed by 10.0.0.1 port 55620 Sep 16 05:02:50.785584 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:50.793840 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:55620.service: Deactivated successfully. Sep 16 05:02:50.795411 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 05:02:50.796141 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Sep 16 05:02:50.798790 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:55632.service - OpenSSH per-connection server daemon (10.0.0.1:55632). Sep 16 05:02:50.799466 systemd-logind[1574]: Removed session 6. Sep 16 05:02:50.855448 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 55632 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:02:50.856640 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:50.860515 systemd-logind[1574]: New session 7 of user core. Sep 16 05:02:50.869620 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 16 05:02:50.920708 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 05:02:50.921003 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:51.206355 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 05:02:51.224794 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 05:02:51.486478 dockerd[1830]: time="2025-09-16T05:02:51.486333514Z" level=info msg="Starting up" Sep 16 05:02:51.487306 dockerd[1830]: time="2025-09-16T05:02:51.487283625Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 05:02:51.499340 dockerd[1830]: time="2025-09-16T05:02:51.499308096Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 05:02:51.563202 dockerd[1830]: time="2025-09-16T05:02:51.563145203Z" level=info msg="Loading containers: start." Sep 16 05:02:51.573542 kernel: Initializing XFRM netlink socket Sep 16 05:02:51.843312 systemd-networkd[1493]: docker0: Link UP Sep 16 05:02:51.847713 dockerd[1830]: time="2025-09-16T05:02:51.847670570Z" level=info msg="Loading containers: done." Sep 16 05:02:51.865636 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3510136861-merged.mount: Deactivated successfully. Sep 16 05:02:51.866050 dockerd[1830]: time="2025-09-16T05:02:51.865805649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 05:02:51.866050 dockerd[1830]: time="2025-09-16T05:02:51.865909624Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 05:02:51.866050 dockerd[1830]: time="2025-09-16T05:02:51.866032034Z" level=info msg="Initializing buildkit" Sep 16 05:02:51.894927 dockerd[1830]: time="2025-09-16T05:02:51.894887999Z" level=info msg="Completed buildkit initialization" Sep 16 05:02:51.899103 dockerd[1830]: time="2025-09-16T05:02:51.899041859Z" level=info msg="Daemon has completed initialization" Sep 16 05:02:51.899229 dockerd[1830]: time="2025-09-16T05:02:51.899163497Z" level=info msg="API listen on /run/docker.sock" Sep 16 05:02:51.899295 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 05:02:52.926685 containerd[1596]: time="2025-09-16T05:02:52.926639251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 05:02:53.542115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712030246.mount: Deactivated successfully. 
Sep 16 05:02:55.009371 containerd[1596]: time="2025-09-16T05:02:55.009305864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:55.010116 containerd[1596]: time="2025-09-16T05:02:55.010076860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 16 05:02:55.011510 containerd[1596]: time="2025-09-16T05:02:55.011448342Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:55.014086 containerd[1596]: time="2025-09-16T05:02:55.014048308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:55.017008 containerd[1596]: time="2025-09-16T05:02:55.016161040Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.089472427s" Sep 16 05:02:55.017008 containerd[1596]: time="2025-09-16T05:02:55.016214460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 16 05:02:55.018011 containerd[1596]: time="2025-09-16T05:02:55.017568810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 05:02:55.963597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 05:02:55.965254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:02:56.265149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:02:56.277794 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:02:56.402546 kubelet[2116]: E0916 05:02:56.402473 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:02:56.411338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:02:56.411548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:02:56.411930 systemd[1]: kubelet.service: Consumed 406ms CPU time, 112.2M memory peak. 
Sep 16 05:02:56.766792 containerd[1596]: time="2025-09-16T05:02:56.766741303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:56.767627 containerd[1596]: time="2025-09-16T05:02:56.767594814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 16 05:02:56.768959 containerd[1596]: time="2025-09-16T05:02:56.768885484Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:56.771387 containerd[1596]: time="2025-09-16T05:02:56.771325680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:56.772289 containerd[1596]: time="2025-09-16T05:02:56.772251306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.754419222s" Sep 16 05:02:56.772321 containerd[1596]: time="2025-09-16T05:02:56.772291021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 16 05:02:56.772845 containerd[1596]: time="2025-09-16T05:02:56.772819031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 05:02:58.541301 containerd[1596]: time="2025-09-16T05:02:58.541237547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:58.542020 containerd[1596]: time="2025-09-16T05:02:58.541995649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 16 05:02:58.543814 containerd[1596]: time="2025-09-16T05:02:58.543757032Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:58.546287 containerd[1596]: time="2025-09-16T05:02:58.546244888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:02:58.548517 containerd[1596]: time="2025-09-16T05:02:58.548179105Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.77533167s" Sep 16 05:02:58.548517 containerd[1596]: time="2025-09-16T05:02:58.548237474Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 16 05:02:58.549361 
containerd[1596]: time="2025-09-16T05:02:58.549329502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 05:02:59.640398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953056901.mount: Deactivated successfully. Sep 16 05:03:00.476716 containerd[1596]: time="2025-09-16T05:03:00.476666297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:00.477552 containerd[1596]: time="2025-09-16T05:03:00.477529756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 16 05:03:00.478840 containerd[1596]: time="2025-09-16T05:03:00.478786663Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:00.480452 containerd[1596]: time="2025-09-16T05:03:00.480421850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:00.480958 containerd[1596]: time="2025-09-16T05:03:00.480912810Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.931492017s" Sep 16 05:03:00.480958 containerd[1596]: time="2025-09-16T05:03:00.480954779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 16 05:03:00.481524 containerd[1596]: time="2025-09-16T05:03:00.481439268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 05:03:01.067576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129415597.mount: Deactivated successfully. 
Sep 16 05:03:01.922873 containerd[1596]: time="2025-09-16T05:03:01.922813369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:01.923678 containerd[1596]: time="2025-09-16T05:03:01.923607358Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 16 05:03:01.924992 containerd[1596]: time="2025-09-16T05:03:01.924943804Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:01.928094 containerd[1596]: time="2025-09-16T05:03:01.928054698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:01.928961 containerd[1596]: time="2025-09-16T05:03:01.928921153Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.447450045s" Sep 16 05:03:01.928961 containerd[1596]: time="2025-09-16T05:03:01.928952231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 16 05:03:01.929534 containerd[1596]: time="2025-09-16T05:03:01.929463149Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 05:03:02.491722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742229663.mount: Deactivated successfully. 
Sep 16 05:03:02.496497 containerd[1596]: time="2025-09-16T05:03:02.496448680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:02.497119 containerd[1596]: time="2025-09-16T05:03:02.497082929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 05:03:02.498197 containerd[1596]: time="2025-09-16T05:03:02.498154008Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:02.500014 containerd[1596]: time="2025-09-16T05:03:02.499980633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:02.500530 containerd[1596]: time="2025-09-16T05:03:02.500480581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.987455ms" Sep 16 05:03:02.500563 containerd[1596]: time="2025-09-16T05:03:02.500531877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 05:03:02.501063 containerd[1596]: time="2025-09-16T05:03:02.501021485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 05:03:03.101624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697753632.mount: Deactivated successfully. 
Sep 16 05:03:05.059793 containerd[1596]: time="2025-09-16T05:03:05.059736409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.060762 containerd[1596]: time="2025-09-16T05:03:05.060353086Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 16 05:03:05.061526 containerd[1596]: time="2025-09-16T05:03:05.061492943Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.064052 containerd[1596]: time="2025-09-16T05:03:05.064033117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.064899 containerd[1596]: time="2025-09-16T05:03:05.064875326Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.563828313s" Sep 16 05:03:05.064957 containerd[1596]: time="2025-09-16T05:03:05.064902588Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 16 05:03:06.662282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 05:03:06.663702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:06.851954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:06.870824 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:03:07.203058 kubelet[2279]: E0916 05:03:07.202956 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:03:07.207246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:03:07.207443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:03:07.207828 systemd[1]: kubelet.service: Consumed 503ms CPU time, 110.5M memory peak. Sep 16 05:03:07.920712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:07.920870 systemd[1]: kubelet.service: Consumed 503ms CPU time, 110.5M memory peak. Sep 16 05:03:07.922920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:07.945452 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)... Sep 16 05:03:07.945465 systemd[1]: Reloading... Sep 16 05:03:08.024621 zram_generator::config[2340]: No configuration found. Sep 16 05:03:08.608109 systemd[1]: Reloading finished in 662 ms. Sep 16 05:03:08.680141 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 05:03:08.680233 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 16 05:03:08.680561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:08.680609 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.2M memory peak. Sep 16 05:03:08.682135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:08.841647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:08.845564 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:08.880667 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:08.880667 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:08.880667 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:08.880963 kubelet[2385]: I0916 05:03:08.880655 2385 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:03:09.198678 kubelet[2385]: I0916 05:03:09.198608 2385 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:03:09.198678 kubelet[2385]: I0916 05:03:09.198635 2385 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:03:09.198925 kubelet[2385]: I0916 05:03:09.198891 2385 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:03:09.222996 kubelet[2385]: E0916 05:03:09.222956 2385 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 05:03:09.223473 kubelet[2385]: I0916 05:03:09.223435 2385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:03:09.229228 kubelet[2385]: I0916 05:03:09.229205 2385 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:03:09.236149 kubelet[2385]: I0916 05:03:09.236110 2385 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 05:03:09.236360 kubelet[2385]: I0916 05:03:09.236324 2385 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:03:09.236530 kubelet[2385]: I0916 05:03:09.236352 2385 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:03:09.236649 kubelet[2385]: I0916 05:03:09.236534 2385 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:03:09.236649 kubelet[2385]: I0916 05:03:09.236545 2385 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:03:09.236701 kubelet[2385]: I0916 05:03:09.236668 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:09.238746 kubelet[2385]: I0916 05:03:09.238710 2385 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:03:09.238746 kubelet[2385]: I0916 05:03:09.238731 2385 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:03:09.238746 kubelet[2385]: I0916 05:03:09.238759 2385 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:03:09.238923 kubelet[2385]: I0916 05:03:09.238781 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:03:09.244001 kubelet[2385]: I0916 05:03:09.243964 2385 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:03:09.245296 kubelet[2385]: I0916 05:03:09.244377 2385 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:03:09.246043 kubelet[2385]: W0916 05:03:09.246019 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 16 05:03:09.248081 kubelet[2385]: E0916 05:03:09.248053 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:03:09.248146 kubelet[2385]: E0916 05:03:09.248054 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:03:09.248776 kubelet[2385]: I0916 05:03:09.248747 2385 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:03:09.248811 kubelet[2385]: I0916 05:03:09.248794 2385 server.go:1289] "Started kubelet" Sep 16 05:03:09.249389 kubelet[2385]: I0916 05:03:09.249307 2385 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:03:09.250436 kubelet[2385]: I0916 05:03:09.250415 2385 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:03:09.251063 kubelet[2385]: I0916 05:03:09.251041 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:03:09.251825 kubelet[2385]: I0916 05:03:09.251765 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:03:09.252086 kubelet[2385]: I0916 05:03:09.252032 2385 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:03:09.252198 kubelet[2385]: I0916 05:03:09.252192 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:03:09.252943 kubelet[2385]: E0916 05:03:09.251882 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865aac435ab25ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 05:03:09.248767438 +0000 UTC m=+0.399562784,LastTimestamp:2025-09-16 05:03:09.248767438 +0000 UTC m=+0.399562784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 05:03:09.253472 kubelet[2385]: E0916 05:03:09.253444 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.253472 kubelet[2385]: I0916 05:03:09.253473 2385 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:03:09.253661 kubelet[2385]: I0916 05:03:09.253638 2385 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:03:09.253707 kubelet[2385]: I0916 05:03:09.253694 2385 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:03:09.253994 kubelet[2385]: E0916 05:03:09.253968 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:03:09.255491 kubelet[2385]: E0916 05:03:09.254327 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Sep 16 05:03:09.255491 kubelet[2385]: E0916 05:03:09.254554 2385 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:03:09.255737 kubelet[2385]: I0916 05:03:09.255709 2385 factory.go:223] Registration of the systemd container factory successfully Sep 16 05:03:09.255811 kubelet[2385]: I0916 05:03:09.255794 2385 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:03:09.257518 kubelet[2385]: I0916 05:03:09.256775 2385 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:03:09.267160 kubelet[2385]: I0916 05:03:09.266858 2385 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:03:09.267160 kubelet[2385]: I0916 05:03:09.267150 2385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:03:09.267160 kubelet[2385]: I0916 05:03:09.267165 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:09.353650 kubelet[2385]: E0916 05:03:09.353601 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.454614 kubelet[2385]: E0916 05:03:09.454465 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.454761 kubelet[2385]: E0916 05:03:09.454717 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Sep 16 05:03:09.554901 kubelet[2385]: E0916 05:03:09.554851 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.655355 kubelet[2385]: E0916 05:03:09.655313 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.745729 kubelet[2385]: I0916 05:03:09.745639 2385 policy_none.go:49] "None policy: Start" Sep 16 05:03:09.745729 kubelet[2385]: I0916 05:03:09.745664 2385 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:03:09.745729 kubelet[2385]: I0916 05:03:09.745676 2385 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:03:09.748611 kubelet[2385]: I0916 05:03:09.748564 2385 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 05:03:09.751364 kubelet[2385]: I0916 05:03:09.751342 2385 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 16 05:03:09.751403 kubelet[2385]: I0916 05:03:09.751369 2385 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:03:09.751403 kubelet[2385]: I0916 05:03:09.751390 2385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 05:03:09.751403 kubelet[2385]: I0916 05:03:09.751397 2385 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:03:09.751491 kubelet[2385]: E0916 05:03:09.751434 2385 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:03:09.751898 kubelet[2385]: E0916 05:03:09.751865 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:03:09.753916 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 05:03:09.755927 kubelet[2385]: E0916 05:03:09.755898 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 05:03:09.764338 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 05:03:09.767426 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 05:03:09.778316 kubelet[2385]: E0916 05:03:09.778288 2385 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:03:09.778513 kubelet[2385]: I0916 05:03:09.778476 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:03:09.778573 kubelet[2385]: I0916 05:03:09.778491 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:03:09.778737 kubelet[2385]: I0916 05:03:09.778686 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:03:09.779278 kubelet[2385]: E0916 05:03:09.779240 2385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 05:03:09.779361 kubelet[2385]: E0916 05:03:09.779344 2385 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 05:03:09.855383 kubelet[2385]: E0916 05:03:09.855344 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Sep 16 05:03:09.856418 kubelet[2385]: I0916 05:03:09.856388 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:09.856418 kubelet[2385]: I0916 05:03:09.856415 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:09.856568 kubelet[2385]: I0916 05:03:09.856436 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:09.880399 kubelet[2385]: I0916 05:03:09.880363 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 05:03:09.880795 kubelet[2385]: E0916 05:03:09.880758 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Sep 16 05:03:10.082745 kubelet[2385]: I0916 05:03:10.082668 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 05:03:10.082969 kubelet[2385]: E0916 05:03:10.082933 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Sep 16 05:03:10.123128 systemd[1]: Created slice kubepods-burstable-pod1e513f2f8dab02aa8d01b48d40dd7a7d.slice - libcontainer container kubepods-burstable-pod1e513f2f8dab02aa8d01b48d40dd7a7d.slice. Sep 16 05:03:10.142012 kubelet[2385]: E0916 05:03:10.141984 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:10.142784 containerd[1596]: time="2025-09-16T05:03:10.142744894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e513f2f8dab02aa8d01b48d40dd7a7d,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:10.145537 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
Sep 16 05:03:10.147442 kubelet[2385]: E0916 05:03:10.147420 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:10.149756 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 16 05:03:10.151257 kubelet[2385]: E0916 05:03:10.151237 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:10.158534 kubelet[2385]: I0916 05:03:10.158491 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:10.158574 kubelet[2385]: I0916 05:03:10.158538 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:10.158574 kubelet[2385]: I0916 05:03:10.158565 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:10.158634 kubelet[2385]: I0916 05:03:10.158581 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:10.158634 kubelet[2385]: I0916 05:03:10.158599 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:10.158634 kubelet[2385]: I0916 05:03:10.158628 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:10.161727 containerd[1596]: time="2025-09-16T05:03:10.161690675Z" level=info msg="connecting to shim 20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd" address="unix:///run/containerd/s/5100587828481f47e70d456698b88f58d1d70e4a575f685980f905856be15ea3" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:10.162095 kubelet[2385]: E0916 05:03:10.162059 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:03:10.188650 systemd[1]: Started cri-containerd-20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd.scope - libcontainer container 20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd. Sep 16 05:03:10.227573 containerd[1596]: time="2025-09-16T05:03:10.227531389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e513f2f8dab02aa8d01b48d40dd7a7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd\"" Sep 16 05:03:10.232830 containerd[1596]: time="2025-09-16T05:03:10.232789721Z" level=info msg="CreateContainer within sandbox \"20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 05:03:10.242614 containerd[1596]: time="2025-09-16T05:03:10.242569621Z" level=info msg="Container b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:10.250194 containerd[1596]: time="2025-09-16T05:03:10.250169885Z" level=info msg="CreateContainer within sandbox \"20c50e980843936be6172d7811c8f9fe4fd8dbe556c33c8e95cd5662094e65fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0\"" Sep 16 05:03:10.250659 containerd[1596]: time="2025-09-16T05:03:10.250627773Z" level=info msg="StartContainer for \"b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0\"" Sep 16 05:03:10.251881 containerd[1596]: time="2025-09-16T05:03:10.251848553Z" level=info msg="connecting to shim b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0" address="unix:///run/containerd/s/5100587828481f47e70d456698b88f58d1d70e4a575f685980f905856be15ea3" protocol=ttrpc version=3 Sep 16 05:03:10.270640 systemd[1]: Started cri-containerd-b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0.scope - libcontainer container b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0. 
Sep 16 05:03:10.314826 containerd[1596]: time="2025-09-16T05:03:10.314802824Z" level=info msg="StartContainer for \"b6f081ff5b25b19106766c6eea2e54ffec1f4e8c0ed7d82089b7e6759d63a1a0\" returns successfully" Sep 16 05:03:10.449238 containerd[1596]: time="2025-09-16T05:03:10.449121390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:10.452828 containerd[1596]: time="2025-09-16T05:03:10.452795610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:10.476619 containerd[1596]: time="2025-09-16T05:03:10.476583180Z" level=info msg="connecting to shim 13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268" address="unix:///run/containerd/s/803190fd91d4901c0b93c5e4bf058bef09729196c7dc5e1fcde06407f2a70c72" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:10.484437 kubelet[2385]: I0916 05:03:10.484417 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 05:03:10.485070 containerd[1596]: time="2025-09-16T05:03:10.485029310Z" level=info msg="connecting to shim 63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e" address="unix:///run/containerd/s/62010af079187d547495e3a7bf6aaf96d6036b0bb2d3179d0f5d93653850552f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:10.503711 systemd[1]: Started cri-containerd-13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268.scope - libcontainer container 13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268. Sep 16 05:03:10.507594 systemd[1]: Started cri-containerd-63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e.scope - libcontainer container 63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e. 
Sep 16 05:03:10.553147 containerd[1596]: time="2025-09-16T05:03:10.553101119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e\"" Sep 16 05:03:10.558775 containerd[1596]: time="2025-09-16T05:03:10.558751054Z" level=info msg="CreateContainer within sandbox \"63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 05:03:10.560350 containerd[1596]: time="2025-09-16T05:03:10.560285322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268\"" Sep 16 05:03:10.564845 containerd[1596]: time="2025-09-16T05:03:10.564827280Z" level=info msg="CreateContainer within sandbox \"13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 05:03:10.571748 containerd[1596]: time="2025-09-16T05:03:10.571705970Z" level=info msg="Container 93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:10.575706 containerd[1596]: time="2025-09-16T05:03:10.575682457Z" level=info msg="Container cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:10.580450 containerd[1596]: time="2025-09-16T05:03:10.580417086Z" level=info msg="CreateContainer within sandbox \"63530bc89389e21991c69c1356b8283339fbfca9dba0c0fe99e837d3781ce60e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783\"" Sep 16 05:03:10.581814 containerd[1596]: time="2025-09-16T05:03:10.580886276Z" level=info msg="StartContainer for \"93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783\"" Sep 16 05:03:10.581814 containerd[1596]: time="2025-09-16T05:03:10.581751478Z" level=info msg="connecting to shim 93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783" address="unix:///run/containerd/s/62010af079187d547495e3a7bf6aaf96d6036b0bb2d3179d0f5d93653850552f" protocol=ttrpc version=3 Sep 16 05:03:10.583603 containerd[1596]: time="2025-09-16T05:03:10.583553518Z" level=info msg="CreateContainer within sandbox \"13a948b7b4fb39645b4ab6dc23b5934c4995407da6db85cf59d6ef33f9264268\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc\"" Sep 16 05:03:10.583981 containerd[1596]: time="2025-09-16T05:03:10.583950913Z" level=info msg="StartContainer for \"cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc\"" Sep 16 05:03:10.584989 containerd[1596]: time="2025-09-16T05:03:10.584958292Z" level=info msg="connecting to shim cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc" address="unix:///run/containerd/s/803190fd91d4901c0b93c5e4bf058bef09729196c7dc5e1fcde06407f2a70c72" protocol=ttrpc version=3 Sep 16 05:03:10.605640 systemd[1]: Started cri-containerd-93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783.scope - libcontainer container 93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783. 
Sep 16 05:03:10.609489 systemd[1]: Started cri-containerd-cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc.scope - libcontainer container cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc. Sep 16 05:03:10.656567 containerd[1596]: time="2025-09-16T05:03:10.656530987Z" level=info msg="StartContainer for \"93335e40c6072659eff78978bf391f386463d1a3b597198da5c3831bf18f1783\" returns successfully" Sep 16 05:03:10.665764 containerd[1596]: time="2025-09-16T05:03:10.665738163Z" level=info msg="StartContainer for \"cc4349574629b788cfbe5e28881cfaf9e72f55e592285ed8f17808fdfcb6cecc\" returns successfully" Sep 16 05:03:10.768809 kubelet[2385]: E0916 05:03:10.766399 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:10.771684 kubelet[2385]: E0916 05:03:10.771553 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:10.776558 kubelet[2385]: E0916 05:03:10.776262 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 05:03:11.419379 kubelet[2385]: E0916 05:03:11.419342 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 16 05:03:11.469316 kubelet[2385]: I0916 05:03:11.469286 2385 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 05:03:11.555222 kubelet[2385]: I0916 05:03:11.555182 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:11.559738 kubelet[2385]: E0916 05:03:11.559299 2385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:11.559738 kubelet[2385]: I0916 05:03:11.559324 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:11.560881 kubelet[2385]: E0916 05:03:11.560868 2385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:11.560938 kubelet[2385]: I0916 05:03:11.560930 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:11.561965 kubelet[2385]: E0916 05:03:11.561937 2385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:11.773416 kubelet[2385]: I0916 05:03:11.773321 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:11.773883 kubelet[2385]: I0916 05:03:11.773858 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:11.775891 kubelet[2385]: E0916 05:03:11.775843 2385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 16 
05:03:11.775957 kubelet[2385]: E0916 05:03:11.775909 2385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:12.245480 kubelet[2385]: I0916 05:03:12.245368 2385 apiserver.go:52] "Watching apiserver" Sep 16 05:03:12.254447 kubelet[2385]: I0916 05:03:12.254411 2385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:03:13.285888 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)... Sep 16 05:03:13.285904 systemd[1]: Reloading... Sep 16 05:03:13.376573 zram_generator::config[2721]: No configuration found. Sep 16 05:03:13.593149 systemd[1]: Reloading finished in 306 ms. Sep 16 05:03:13.601314 kubelet[2385]: I0916 05:03:13.601291 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:13.620436 kubelet[2385]: I0916 05:03:13.620388 2385 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:03:13.620724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:13.637614 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 05:03:13.637935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:13.637983 systemd[1]: kubelet.service: Consumed 829ms CPU time, 131.2M memory peak. Sep 16 05:03:13.639658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:13.866209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:13.877902 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:13.910846 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:13.910846 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:13.910846 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 05:03:13.911182 kubelet[2763]: I0916 05:03:13.910892 2763 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:03:13.918230 kubelet[2763]: I0916 05:03:13.918202 2763 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:03:13.918230 kubelet[2763]: I0916 05:03:13.918222 2763 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:03:13.918429 kubelet[2763]: I0916 05:03:13.918407 2763 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:03:13.919445 kubelet[2763]: I0916 05:03:13.919422 2763 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 16 05:03:13.921912 kubelet[2763]: I0916 05:03:13.921868 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:03:13.925271 kubelet[2763]: I0916 05:03:13.925246 2763 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:03:13.931711 kubelet[2763]: I0916 05:03:13.931677 2763 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 05:03:13.932163 kubelet[2763]: I0916 05:03:13.932124 2763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:03:13.932307 kubelet[2763]: I0916 05:03:13.932152 2763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:03:13.932307 kubelet[2763]: I0916 05:03:13.932302 2763 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:03:13.932406 kubelet[2763]: I0916 05:03:13.932311 2763 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:03:13.932406 kubelet[2763]: I0916 05:03:13.932358 2763 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:13.932557 kubelet[2763]: I0916 
05:03:13.932538 2763 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:03:13.932557 kubelet[2763]: I0916 05:03:13.932552 2763 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:03:13.932608 kubelet[2763]: I0916 05:03:13.932572 2763 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:03:13.932608 kubelet[2763]: I0916 05:03:13.932587 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:03:13.933463 kubelet[2763]: I0916 05:03:13.933382 2763 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:03:13.933830 kubelet[2763]: I0916 05:03:13.933811 2763 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:03:13.940684 kubelet[2763]: I0916 05:03:13.940541 2763 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:03:13.940684 kubelet[2763]: I0916 05:03:13.940586 2763 server.go:1289] "Started kubelet" Sep 16 05:03:13.942993 kubelet[2763]: I0916 05:03:13.941791 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:03:13.942993 kubelet[2763]: I0916 05:03:13.942104 2763 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:03:13.942993 kubelet[2763]: I0916 05:03:13.942147 2763 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:03:13.942993 kubelet[2763]: I0916 05:03:13.942392 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:03:13.944291 kubelet[2763]: I0916 05:03:13.943007 2763 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:03:13.944291 kubelet[2763]: I0916 05:03:13.944154 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:03:13.945315 kubelet[2763]: I0916 05:03:13.945238 2763 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:03:13.945395 kubelet[2763]: I0916 05:03:13.945352 2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:03:13.945565 kubelet[2763]: I0916 05:03:13.945550 2763 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:03:13.946248 kubelet[2763]: I0916 05:03:13.946160 2763 factory.go:223] Registration of the systemd container factory successfully Sep 16 05:03:13.946248 kubelet[2763]: I0916 05:03:13.946243 2763 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:03:13.948200 kubelet[2763]: I0916 05:03:13.948174 2763 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:03:13.949043 kubelet[2763]: E0916 05:03:13.948947 2763 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:03:13.953178 kubelet[2763]: I0916 05:03:13.953042 2763 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 05:03:13.961054 kubelet[2763]: I0916 05:03:13.961038 2763 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 16 05:03:13.961129 kubelet[2763]: I0916 05:03:13.961119 2763 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:03:13.961194 kubelet[2763]: I0916 05:03:13.961184 2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 05:03:13.961239 kubelet[2763]: I0916 05:03:13.961232 2763 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:03:13.961327 kubelet[2763]: E0916 05:03:13.961310 2763 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:03:13.986607 kubelet[2763]: I0916 05:03:13.986581 2763 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:03:13.986607 kubelet[2763]: I0916 05:03:13.986598 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:03:13.986737 kubelet[2763]: I0916 05:03:13.986633 2763 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:13.986780 kubelet[2763]: I0916 05:03:13.986745 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 05:03:13.986780 kubelet[2763]: I0916 05:03:13.986763 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 05:03:13.986780 kubelet[2763]: I0916 05:03:13.986781 2763 policy_none.go:49] "None policy: Start" Sep 16 05:03:13.986845 kubelet[2763]: I0916 05:03:13.986791 2763 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:03:13.986845 kubelet[2763]: I0916 05:03:13.986801 2763 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:03:13.986896 kubelet[2763]: I0916 05:03:13.986882 2763 state_mem.go:75] "Updated machine memory state" Sep 16 05:03:13.990832 kubelet[2763]: E0916 05:03:13.990804 2763 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:03:13.990972 kubelet[2763]: I0916 05:03:13.990957 2763 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:03:13.990999 kubelet[2763]: I0916 05:03:13.990970 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:03:13.992382 kubelet[2763]: I0916 05:03:13.992353 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:03:13.992918 kubelet[2763]: E0916 05:03:13.992897 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 05:03:14.062269 kubelet[2763]: I0916 05:03:14.062243 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:14.062337 kubelet[2763]: I0916 05:03:14.062303 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.062371 kubelet[2763]: I0916 05:03:14.062341 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.069304 kubelet[2763]: E0916 05:03:14.069261 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.096224 kubelet[2763]: I0916 05:03:14.096189 2763 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 05:03:14.100993 kubelet[2763]: I0916 05:03:14.100955 2763 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 16 05:03:14.101064 kubelet[2763]: I0916 05:03:14.101018 2763 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 05:03:14.147314 kubelet[2763]: I0916 05:03:14.147209 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.147314 kubelet[2763]: I0916 05:03:14.147240 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.147314 kubelet[2763]: I0916 05:03:14.147283 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.147485 kubelet[2763]: I0916 05:03:14.147317 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.147485 kubelet[2763]: I0916 05:03:14.147368 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.147485 kubelet[2763]: I0916 05:03:14.147394 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.147485 kubelet[2763]: I0916 05:03:14.147413 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 05:03:14.147485 kubelet[2763]: I0916 05:03:14.147430 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:14.147629 kubelet[2763]: I0916 05:03:14.147446 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e513f2f8dab02aa8d01b48d40dd7a7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e513f2f8dab02aa8d01b48d40dd7a7d\") " pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.288455 sudo[2802]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 05:03:14.288836 sudo[2802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 05:03:14.575775 sudo[2802]: pam_unix(sudo:session): session closed for user root Sep 16 05:03:14.933669 kubelet[2763]: I0916 05:03:14.933561 2763 apiserver.go:52] "Watching apiserver" Sep 16 05:03:14.945486 kubelet[2763]: I0916 05:03:14.945441 2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:03:14.974006 kubelet[2763]: I0916 05:03:14.973988 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.974425 kubelet[2763]: I0916 05:03:14.974403 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:14.979266 kubelet[2763]: E0916 05:03:14.979112 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 05:03:14.979943 kubelet[2763]: E0916 05:03:14.979925 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 05:03:14.996214 kubelet[2763]: I0916 05:03:14.996133 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.996118001 podStartE2EDuration="996.118001ms" podCreationTimestamp="2025-09-16 05:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:14.995859617 +0000 UTC m=+1.113933063" watchObservedRunningTime="2025-09-16 05:03:14.996118001 +0000 UTC m=+1.114191437" Sep 16 05:03:14.996302 kubelet[2763]: I0916 05:03:14.996243 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.996239369 podStartE2EDuration="996.239369ms" podCreationTimestamp="2025-09-16 05:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-16 05:03:14.989896784 +0000 UTC m=+1.107970241" watchObservedRunningTime="2025-09-16 05:03:14.996239369 +0000 UTC m=+1.114312805" Sep 16 05:03:15.007653 kubelet[2763]: I0916 05:03:15.007597 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.007491771 podStartE2EDuration="2.007491771s" podCreationTimestamp="2025-09-16 05:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:15.001174083 +0000 UTC m=+1.119247519" watchObservedRunningTime="2025-09-16 05:03:15.007491771 +0000 UTC m=+1.125565207" Sep 16 05:03:16.379729 sudo[1809]: pam_unix(sudo:session): session closed for user root Sep 16 05:03:16.381929 sshd[1808]: Connection closed by 10.0.0.1 port 55632 Sep 16 05:03:16.382285 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:16.386663 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:55632.service: Deactivated successfully. Sep 16 05:03:16.388767 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 05:03:16.388970 systemd[1]: session-7.scope: Consumed 4.842s CPU time, 259.3M memory peak. Sep 16 05:03:16.390146 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Sep 16 05:03:16.391171 systemd-logind[1574]: Removed session 7. Sep 16 05:03:21.029175 kubelet[2763]: I0916 05:03:21.029141 2763 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 05:03:21.029581 containerd[1596]: time="2025-09-16T05:03:21.029538179Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 05:03:21.029864 kubelet[2763]: I0916 05:03:21.029774 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 05:03:22.073178 systemd[1]: Created slice kubepods-besteffort-pod79bfefb9_fc28_41fc_83bb_3344fe584ecd.slice - libcontainer container kubepods-besteffort-pod79bfefb9_fc28_41fc_83bb_3344fe584ecd.slice. Sep 16 05:03:22.085861 systemd[1]: Created slice kubepods-burstable-podaae2867b_194c_47af_8188_2901d549d330.slice - libcontainer container kubepods-burstable-podaae2867b_194c_47af_8188_2901d549d330.slice. 
Sep 16 05:03:22.098772 kubelet[2763]: I0916 05:03:22.098727 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79bfefb9-fc28-41fc-83bb-3344fe584ecd-lib-modules\") pod \"kube-proxy-r89rq\" (UID: \"79bfefb9-fc28-41fc-83bb-3344fe584ecd\") " pod="kube-system/kube-proxy-r89rq" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098816 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnvn\" (UniqueName: \"kubernetes.io/projected/79bfefb9-fc28-41fc-83bb-3344fe584ecd-kube-api-access-dqnvn\") pod \"kube-proxy-r89rq\" (UID: \"79bfefb9-fc28-41fc-83bb-3344fe584ecd\") " pod="kube-system/kube-proxy-r89rq" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098840 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-bpf-maps\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098853 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-kernel\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098919 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-hubble-tls\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098963 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-hostproc\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099137 kubelet[2763]: I0916 05:03:22.098978 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-lib-modules\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099000 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae2867b-194c-47af-8188-2901d549d330-cilium-config-path\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099065 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-net\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099115 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79bfefb9-fc28-41fc-83bb-3344fe584ecd-kube-proxy\") pod \"kube-proxy-r89rq\" (UID: \"79bfefb9-fc28-41fc-83bb-3344fe584ecd\") " pod="kube-system/kube-proxy-r89rq" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099128 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79bfefb9-fc28-41fc-83bb-3344fe584ecd-xtables-lock\") pod \"kube-proxy-r89rq\" (UID: \"79bfefb9-fc28-41fc-83bb-3344fe584ecd\") " pod="kube-system/kube-proxy-r89rq" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099143 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-run\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099269 kubelet[2763]: I0916 05:03:22.099197 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-cgroup\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099390 kubelet[2763]: I0916 05:03:22.099214 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-etc-cni-netd\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099390 kubelet[2763]: I0916 05:03:22.099280 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-xtables-lock\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099390 kubelet[2763]: I0916 05:03:22.099295 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-kube-api-access-mwxzr\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099390 kubelet[2763]: I0916 05:03:22.099349 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cni-path\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.099390 kubelet[2763]: I0916 05:03:22.099363 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae2867b-194c-47af-8188-2901d549d330-clustermesh-secrets\") pod \"cilium-wtfz4\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " pod="kube-system/cilium-wtfz4" Sep 16 05:03:22.164657 systemd[1]: Created slice kubepods-besteffort-pod3f744d51_fa72_40e6_b1ca_2dca418b9000.slice - libcontainer container kubepods-besteffort-pod3f744d51_fa72_40e6_b1ca_2dca418b9000.slice. 
Sep 16 05:03:22.200611 kubelet[2763]: I0916 05:03:22.200567 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bqd\" (UniqueName: \"kubernetes.io/projected/3f744d51-fa72-40e6-b1ca-2dca418b9000-kube-api-access-j8bqd\") pod \"cilium-operator-6c4d7847fc-qpjh9\" (UID: \"3f744d51-fa72-40e6-b1ca-2dca418b9000\") " pod="kube-system/cilium-operator-6c4d7847fc-qpjh9" Sep 16 05:03:22.200832 kubelet[2763]: I0916 05:03:22.200780 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f744d51-fa72-40e6-b1ca-2dca418b9000-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qpjh9\" (UID: \"3f744d51-fa72-40e6-b1ca-2dca418b9000\") " pod="kube-system/cilium-operator-6c4d7847fc-qpjh9" Sep 16 05:03:22.382082 containerd[1596]: time="2025-09-16T05:03:22.382003432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r89rq,Uid:79bfefb9-fc28-41fc-83bb-3344fe584ecd,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:22.391097 containerd[1596]: time="2025-09-16T05:03:22.391065285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtfz4,Uid:aae2867b-194c-47af-8188-2901d549d330,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:22.422240 containerd[1596]: time="2025-09-16T05:03:22.422201546Z" level=info msg="connecting to shim 01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:22.422300 containerd[1596]: time="2025-09-16T05:03:22.422263826Z" level=info msg="connecting to shim b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f" address="unix:///run/containerd/s/aedc8885519ec771c9d6e852f097e3c6fdbe7203aac22096e112465568fc4681" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:22.470009 containerd[1596]: time="2025-09-16T05:03:22.469152436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qpjh9,Uid:3f744d51-fa72-40e6-b1ca-2dca418b9000,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:22.472643 systemd[1]: Started cri-containerd-01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510.scope - libcontainer container 01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510. Sep 16 05:03:22.477025 systemd[1]: Started cri-containerd-b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f.scope - libcontainer container b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f. 
Sep 16 05:03:22.494904 containerd[1596]: time="2025-09-16T05:03:22.494868732Z" level=info msg="connecting to shim 2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e" address="unix:///run/containerd/s/bdbe32331f8e614c3813983eed48f8a3556136abdb64490c8c3bca2c25f8854a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:22.507682 containerd[1596]: time="2025-09-16T05:03:22.507580104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtfz4,Uid:aae2867b-194c-47af-8188-2901d549d330,Namespace:kube-system,Attempt:0,} returns sandbox id \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\"" Sep 16 05:03:22.510437 containerd[1596]: time="2025-09-16T05:03:22.510420256Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 05:03:22.518669 containerd[1596]: time="2025-09-16T05:03:22.518648825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r89rq,Uid:79bfefb9-fc28-41fc-83bb-3344fe584ecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f\"" Sep 16 05:03:22.521681 systemd[1]: Started cri-containerd-2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e.scope - libcontainer container 2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e. Sep 16 05:03:22.524396 containerd[1596]: time="2025-09-16T05:03:22.524283501Z" level=info msg="CreateContainer within sandbox \"b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 05:03:22.535541 containerd[1596]: time="2025-09-16T05:03:22.535491088Z" level=info msg="Container b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:22.546211 containerd[1596]: time="2025-09-16T05:03:22.546183097Z" level=info msg="CreateContainer within sandbox \"b222869076e28023f15c31fcbdf3f4c45e9788d7fc49ad2e47d682e51f46d01f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76\"" Sep 16 05:03:22.546861 containerd[1596]: time="2025-09-16T05:03:22.546769890Z" level=info msg="StartContainer for \"b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76\"" Sep 16 05:03:22.548019 containerd[1596]: time="2025-09-16T05:03:22.547997267Z" level=info msg="connecting to shim b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76" address="unix:///run/containerd/s/aedc8885519ec771c9d6e852f097e3c6fdbe7203aac22096e112465568fc4681" protocol=ttrpc version=3 Sep 16 05:03:22.565115 containerd[1596]: time="2025-09-16T05:03:22.565062517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qpjh9,Uid:3f744d51-fa72-40e6-b1ca-2dca418b9000,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\"" Sep 16 05:03:22.567620 systemd[1]: Started cri-containerd-b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76.scope - libcontainer container b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76. 
Sep 16 05:03:22.608028 containerd[1596]: time="2025-09-16T05:03:22.607997134Z" level=info msg="StartContainer for \"b6f1fc481f4dd93a0a6a5ec2f7b7c4f8a480f065e0d562ea5d06261b16c6ef76\" returns successfully" Sep 16 05:03:27.892586 kubelet[2763]: I0916 05:03:27.892527 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r89rq" podStartSLOduration=5.892496428 podStartE2EDuration="5.892496428s" podCreationTimestamp="2025-09-16 05:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:22.995641744 +0000 UTC m=+9.113715170" watchObservedRunningTime="2025-09-16 05:03:27.892496428 +0000 UTC m=+14.010569864" Sep 16 05:03:28.933704 update_engine[1575]: I20250916 05:03:28.933642 1575 update_attempter.cc:509] Updating boot flags... Sep 16 05:03:34.784705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350058886.mount: Deactivated successfully. Sep 16 05:03:36.920477 containerd[1596]: time="2025-09-16T05:03:36.920421456Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:36.921263 containerd[1596]: time="2025-09-16T05:03:36.921220256Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 16 05:03:36.922403 containerd[1596]: time="2025-09-16T05:03:36.922373587Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:36.923788 containerd[1596]: time="2025-09-16T05:03:36.923757533Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.413249119s" Sep 16 05:03:36.923788 containerd[1596]: time="2025-09-16T05:03:36.923784273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 16 05:03:36.924434 containerd[1596]: time="2025-09-16T05:03:36.924396320Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 05:03:36.927971 containerd[1596]: time="2025-09-16T05:03:36.927939118Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:03:36.938292 containerd[1596]: time="2025-09-16T05:03:36.938251583Z" level=info msg="Container fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:36.944322 containerd[1596]: time="2025-09-16T05:03:36.944286341Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\"" Sep 16 05:03:36.944673 containerd[1596]: time="2025-09-16T05:03:36.944635432Z" level=info msg="StartContainer for \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\"" Sep 16 05:03:36.945418 containerd[1596]: time="2025-09-16T05:03:36.945393444Z" level=info msg="connecting to shim fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" protocol=ttrpc version=3 Sep 16 05:03:36.965647 systemd[1]: Started cri-containerd-fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8.scope - libcontainer container fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8. Sep 16 05:03:36.995562 containerd[1596]: time="2025-09-16T05:03:36.995525828Z" level=info msg="StartContainer for \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" returns successfully" Sep 16 05:03:37.010043 systemd[1]: cri-containerd-fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8.scope: Deactivated successfully. Sep 16 05:03:37.012255 containerd[1596]: time="2025-09-16T05:03:37.012217833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" id:\"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" pid:3207 exited_at:{seconds:1757999017 nanos:11871899}" Sep 16 05:03:37.012380 containerd[1596]: time="2025-09-16T05:03:37.012358088Z" level=info msg="received exit event container_id:\"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" id:\"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" pid:3207 exited_at:{seconds:1757999017 nanos:11871899}" Sep 16 05:03:37.937182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8-rootfs.mount: Deactivated successfully. Sep 16 05:03:38.021234 containerd[1596]: time="2025-09-16T05:03:38.021192148Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 05:03:38.030938 containerd[1596]: time="2025-09-16T05:03:38.030888050Z" level=info msg="Container aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:38.037772 containerd[1596]: time="2025-09-16T05:03:38.037747195Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\"" Sep 16 05:03:38.038212 containerd[1596]: time="2025-09-16T05:03:38.038181294Z" level=info msg="StartContainer for \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\"" Sep 16 05:03:38.038953 containerd[1596]: time="2025-09-16T05:03:38.038918526Z" level=info msg="connecting to shim aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" protocol=ttrpc version=3 Sep 16 05:03:38.058634 systemd[1]: Started cri-containerd-aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316.scope - libcontainer container aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316. 
Sep 16 05:03:38.085989 containerd[1596]: time="2025-09-16T05:03:38.085958077Z" level=info msg="StartContainer for \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" returns successfully" Sep 16 05:03:38.100494 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 05:03:38.100800 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:38.101985 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:03:38.103832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:03:38.105766 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 05:03:38.108361 systemd[1]: cri-containerd-aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316.scope: Deactivated successfully. Sep 16 05:03:38.109063 containerd[1596]: time="2025-09-16T05:03:38.109028157Z" level=info msg="received exit event container_id:\"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" id:\"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" pid:3253 exited_at:{seconds:1757999018 nanos:108895307}" Sep 16 05:03:38.109236 containerd[1596]: time="2025-09-16T05:03:38.109198560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" id:\"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" pid:3253 exited_at:{seconds:1757999018 nanos:108895307}" Sep 16 05:03:38.126964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:38.937456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316-rootfs.mount: Deactivated successfully. Sep 16 05:03:39.148946 containerd[1596]: time="2025-09-16T05:03:39.148901339Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 05:03:39.162892 containerd[1596]: time="2025-09-16T05:03:39.162837528Z" level=info msg="Container 1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:39.166573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339234042.mount: Deactivated successfully. 
Sep 16 05:03:39.174837 containerd[1596]: time="2025-09-16T05:03:39.174796574Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\"" Sep 16 05:03:39.175332 containerd[1596]: time="2025-09-16T05:03:39.175308740Z" level=info msg="StartContainer for \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\"" Sep 16 05:03:39.176564 containerd[1596]: time="2025-09-16T05:03:39.176535326Z" level=info msg="connecting to shim 1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" protocol=ttrpc version=3 Sep 16 05:03:39.179840 containerd[1596]: time="2025-09-16T05:03:39.179742009Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:39.180459 containerd[1596]: time="2025-09-16T05:03:39.180431191Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 16 05:03:39.181310 containerd[1596]: time="2025-09-16T05:03:39.181274703Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:39.182585 containerd[1596]: time="2025-09-16T05:03:39.182557554Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.258127701s" Sep 16 05:03:39.182631 containerd[1596]: time="2025-09-16T05:03:39.182590507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 16 05:03:39.189532 containerd[1596]: time="2025-09-16T05:03:39.189448383Z" level=info msg="CreateContainer within sandbox \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 05:03:39.198050 containerd[1596]: time="2025-09-16T05:03:39.198025445Z" level=info msg="Container 4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:39.198674 systemd[1]: Started cri-containerd-1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3.scope - libcontainer container 1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3. 
Sep 16 05:03:39.204890 containerd[1596]: time="2025-09-16T05:03:39.204855368Z" level=info msg="CreateContainer within sandbox \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\"" Sep 16 05:03:39.205820 containerd[1596]: time="2025-09-16T05:03:39.205799611Z" level=info msg="StartContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\"" Sep 16 05:03:39.207124 containerd[1596]: time="2025-09-16T05:03:39.207057105Z" level=info msg="connecting to shim 4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9" address="unix:///run/containerd/s/bdbe32331f8e614c3813983eed48f8a3556136abdb64490c8c3bca2c25f8854a" protocol=ttrpc version=3 Sep 16 05:03:39.229625 systemd[1]: Started cri-containerd-4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9.scope - libcontainer container 4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9. Sep 16 05:03:39.256601 systemd[1]: cri-containerd-1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3.scope: Deactivated successfully. Sep 16 05:03:39.259164 containerd[1596]: time="2025-09-16T05:03:39.259041471Z" level=info msg="StartContainer for \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" returns successfully" Sep 16 05:03:39.260138 containerd[1596]: time="2025-09-16T05:03:39.260054162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" id:\"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" pid:3316 exited_at:{seconds:1757999019 nanos:259747965}" Sep 16 05:03:39.260366 containerd[1596]: time="2025-09-16T05:03:39.260330685Z" level=info msg="received exit event container_id:\"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" id:\"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" pid:3316 exited_at:{seconds:1757999019 nanos:259747965}" Sep 16 05:03:39.273675 containerd[1596]: time="2025-09-16T05:03:39.273636664Z" level=info msg="StartContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" returns successfully" Sep 16 05:03:39.938234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3-rootfs.mount: Deactivated successfully. 
Sep 16 05:03:40.111716 containerd[1596]: time="2025-09-16T05:03:40.111659777Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 05:03:40.124601 kubelet[2763]: I0916 05:03:40.124282 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qpjh9" podStartSLOduration=1.505097994 podStartE2EDuration="18.124264814s" podCreationTimestamp="2025-09-16 05:03:22 +0000 UTC" firstStartedPulling="2025-09-16 05:03:22.566062088 +0000 UTC m=+8.684135524" lastFinishedPulling="2025-09-16 05:03:39.185228908 +0000 UTC m=+25.303302344" observedRunningTime="2025-09-16 05:03:40.12415743 +0000 UTC m=+26.242230866" watchObservedRunningTime="2025-09-16 05:03:40.124264814 +0000 UTC m=+26.242338250" Sep 16 05:03:40.125839 containerd[1596]: time="2025-09-16T05:03:40.125234864Z" level=info msg="Container 177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:40.134189 containerd[1596]: time="2025-09-16T05:03:40.134147121Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\"" Sep 16 05:03:40.135884 containerd[1596]: time="2025-09-16T05:03:40.135678702Z" level=info msg="StartContainer for \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\"" Sep 16 05:03:40.136780 containerd[1596]: time="2025-09-16T05:03:40.136757017Z" level=info msg="connecting to shim 177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" protocol=ttrpc version=3 Sep 16 05:03:40.160642 systemd[1]: Started cri-containerd-177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559.scope - libcontainer container 177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559. Sep 16 05:03:40.185428 systemd[1]: cri-containerd-177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559.scope: Deactivated successfully. Sep 16 05:03:40.186108 containerd[1596]: time="2025-09-16T05:03:40.186074590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" id:\"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" pid:3387 exited_at:{seconds:1757999020 nanos:185655829}" Sep 16 05:03:40.189546 containerd[1596]: time="2025-09-16T05:03:40.189431264Z" level=info msg="received exit event container_id:\"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" id:\"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" pid:3387 exited_at:{seconds:1757999020 nanos:185655829}" Sep 16 05:03:40.194598 containerd[1596]: time="2025-09-16T05:03:40.194566995Z" level=info msg="StartContainer for \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" returns successfully" Sep 16 05:03:40.213414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559-rootfs.mount: Deactivated successfully. 
Sep 16 05:03:41.037586 containerd[1596]: time="2025-09-16T05:03:41.037534155Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 05:03:41.050770 containerd[1596]: time="2025-09-16T05:03:41.050728203Z" level=info msg="Container e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:41.061392 containerd[1596]: time="2025-09-16T05:03:41.061345731Z" level=info msg="CreateContainer within sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\"" Sep 16 05:03:41.061942 containerd[1596]: time="2025-09-16T05:03:41.061917520Z" level=info msg="StartContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\"" Sep 16 05:03:41.062830 containerd[1596]: time="2025-09-16T05:03:41.062807187Z" level=info msg="connecting to shim e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11" address="unix:///run/containerd/s/e5606bba0e5a1d1ea1fe58e1562a51e1a757532d7475b37e7363fc82486f781d" protocol=ttrpc version=3 Sep 16 05:03:41.081631 systemd[1]: Started cri-containerd-e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11.scope - libcontainer container e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11. Sep 16 05:03:41.112904 containerd[1596]: time="2025-09-16T05:03:41.112874980Z" level=info msg="StartContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" returns successfully" Sep 16 05:03:41.191225 containerd[1596]: time="2025-09-16T05:03:41.191140702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" id:\"8afbae9babff265de90e3a72f2b24b0fb9df331197ea728339e3d65db65be694\" pid:3457 exited_at:{seconds:1757999021 nanos:190879529}" Sep 16 05:03:41.228793 kubelet[2763]: I0916 05:03:41.228769 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 05:03:41.300830 systemd[1]: Created slice kubepods-burstable-pod93d2d13a_2e8c_411a_9e08_84ae22601030.slice - libcontainer container kubepods-burstable-pod93d2d13a_2e8c_411a_9e08_84ae22601030.slice. Sep 16 05:03:41.310307 systemd[1]: Created slice kubepods-burstable-pod765dc3b6_b6af_447c_ae79_96db53323e06.slice - libcontainer container kubepods-burstable-pod765dc3b6_b6af_447c_ae79_96db53323e06.slice. 
Sep 16 05:03:41.320918 kubelet[2763]: I0916 05:03:41.320887 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d2d13a-2e8c-411a-9e08-84ae22601030-config-volume\") pod \"coredns-674b8bbfcf-5q96z\" (UID: \"93d2d13a-2e8c-411a-9e08-84ae22601030\") " pod="kube-system/coredns-674b8bbfcf-5q96z" Sep 16 05:03:41.321002 kubelet[2763]: I0916 05:03:41.320923 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q92r\" (UniqueName: \"kubernetes.io/projected/93d2d13a-2e8c-411a-9e08-84ae22601030-kube-api-access-9q92r\") pod \"coredns-674b8bbfcf-5q96z\" (UID: \"93d2d13a-2e8c-411a-9e08-84ae22601030\") " pod="kube-system/coredns-674b8bbfcf-5q96z" Sep 16 05:03:41.321002 kubelet[2763]: I0916 05:03:41.320941 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68mh\" (UniqueName: \"kubernetes.io/projected/765dc3b6-b6af-447c-ae79-96db53323e06-kube-api-access-d68mh\") pod \"coredns-674b8bbfcf-9qmdj\" (UID: \"765dc3b6-b6af-447c-ae79-96db53323e06\") " pod="kube-system/coredns-674b8bbfcf-9qmdj" Sep 16 05:03:41.321002 kubelet[2763]: I0916 05:03:41.320956 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/765dc3b6-b6af-447c-ae79-96db53323e06-config-volume\") pod \"coredns-674b8bbfcf-9qmdj\" (UID: \"765dc3b6-b6af-447c-ae79-96db53323e06\") " pod="kube-system/coredns-674b8bbfcf-9qmdj" Sep 16 05:03:41.607907 containerd[1596]: time="2025-09-16T05:03:41.607862571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5q96z,Uid:93d2d13a-2e8c-411a-9e08-84ae22601030,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:41.613532 containerd[1596]: time="2025-09-16T05:03:41.613467443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qmdj,Uid:765dc3b6-b6af-447c-ae79-96db53323e06,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:42.054517 kubelet[2763]: I0916 05:03:42.053903 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wtfz4" podStartSLOduration=5.639795331 podStartE2EDuration="20.053888401s" podCreationTimestamp="2025-09-16 05:03:22 +0000 UTC" firstStartedPulling="2025-09-16 05:03:22.510204323 +0000 UTC m=+8.628277759" lastFinishedPulling="2025-09-16 05:03:36.924297403 +0000 UTC m=+23.042370829" observedRunningTime="2025-09-16 05:03:42.052602566 +0000 UTC m=+28.170676002" watchObservedRunningTime="2025-09-16 05:03:42.053888401 +0000 UTC m=+28.171961837" Sep 16 05:03:42.446224 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). Sep 16 05:03:42.502961 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:42.504419 sshd-session[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:42.508554 systemd-logind[1574]: New session 8 of user core. Sep 16 05:03:42.521640 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 05:03:42.639913 sshd[3554]: Connection closed by 10.0.0.1 port 35622 Sep 16 05:03:42.640226 sshd-session[3551]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:42.644554 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:35622.service: Deactivated successfully. 
Sep 16 05:03:42.646593 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 05:03:42.647335 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Sep 16 05:03:42.648376 systemd-logind[1574]: Removed session 8. Sep 16 05:03:43.293187 systemd-networkd[1493]: cilium_host: Link UP Sep 16 05:03:43.293388 systemd-networkd[1493]: cilium_net: Link UP Sep 16 05:03:43.295698 systemd-networkd[1493]: cilium_net: Gained carrier Sep 16 05:03:43.295983 systemd-networkd[1493]: cilium_host: Gained carrier Sep 16 05:03:43.386846 systemd-networkd[1493]: cilium_vxlan: Link UP Sep 16 05:03:43.387377 systemd-networkd[1493]: cilium_vxlan: Gained carrier Sep 16 05:03:43.484700 systemd-networkd[1493]: cilium_net: Gained IPv6LL Sep 16 05:03:43.593560 kernel: NET: Registered PF_ALG protocol family Sep 16 05:03:43.837735 systemd-networkd[1493]: cilium_host: Gained IPv6LL Sep 16 05:03:44.205608 systemd-networkd[1493]: lxc_health: Link UP Sep 16 05:03:44.216785 systemd-networkd[1493]: lxc_health: Gained carrier Sep 16 05:03:44.451543 kernel: eth0: renamed from tmpa0d32 Sep 16 05:03:44.452744 systemd-networkd[1493]: lxc63f2013e3b32: Link UP Sep 16 05:03:44.460996 systemd-networkd[1493]: lxc72ff2b8b277d: Link UP Sep 16 05:03:44.463070 systemd-networkd[1493]: lxc63f2013e3b32: Gained carrier Sep 16 05:03:44.463526 kernel: eth0: renamed from tmpd3b84 Sep 16 05:03:44.465180 systemd-networkd[1493]: lxc72ff2b8b277d: Gained carrier Sep 16 05:03:45.372729 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL Sep 16 05:03:45.436626 systemd-networkd[1493]: lxc_health: Gained IPv6LL Sep 16 05:03:46.012644 systemd-networkd[1493]: lxc63f2013e3b32: Gained IPv6LL Sep 16 05:03:46.460677 systemd-networkd[1493]: lxc72ff2b8b277d: Gained IPv6LL Sep 16 05:03:47.653002 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:35638.service - OpenSSH per-connection server daemon (10.0.0.1:35638). Sep 16 05:03:47.711223 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 35638 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:47.711244 sshd-session[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:47.717117 systemd-logind[1574]: New session 9 of user core. Sep 16 05:03:47.725623 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 05:03:47.799436 containerd[1596]: time="2025-09-16T05:03:47.799388252Z" level=info msg="connecting to shim a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18" address="unix:///run/containerd/s/eb33a1510fe2345755778817ac401b8c3eba273dea7bfa0f7826930f64fa30bf" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:47.812280 containerd[1596]: time="2025-09-16T05:03:47.812238936Z" level=info msg="connecting to shim d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6" address="unix:///run/containerd/s/f5ab4c9799e3100732d844d57a15ad10f129dbf3ee2f0a93a52997b952eecb47" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:47.828649 systemd[1]: Started cri-containerd-a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18.scope - libcontainer container a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18. Sep 16 05:03:47.845197 systemd[1]: Started cri-containerd-d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6.scope - libcontainer container d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6. 
Sep 16 05:03:47.858946 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 05:03:47.861937 sshd[3962]: Connection closed by 10.0.0.1 port 35638 Sep 16 05:03:47.862452 sshd-session[3956]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:47.864860 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 05:03:47.867489 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:35638.service: Deactivated successfully. Sep 16 05:03:47.871008 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 05:03:47.873354 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Sep 16 05:03:47.874878 systemd-logind[1574]: Removed session 9. Sep 16 05:03:47.893288 containerd[1596]: time="2025-09-16T05:03:47.893250855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5q96z,Uid:93d2d13a-2e8c-411a-9e08-84ae22601030,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6\"" Sep 16 05:03:47.899543 containerd[1596]: time="2025-09-16T05:03:47.899494399Z" level=info msg="CreateContainer within sandbox \"d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:03:47.902782 containerd[1596]: time="2025-09-16T05:03:47.902748366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qmdj,Uid:765dc3b6-b6af-447c-ae79-96db53323e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18\"" Sep 16 05:03:47.907817 containerd[1596]: time="2025-09-16T05:03:47.907724665Z" level=info msg="CreateContainer within sandbox \"a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:03:47.914525 containerd[1596]: time="2025-09-16T05:03:47.914249460Z" level=info msg="Container f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:47.921495 containerd[1596]: time="2025-09-16T05:03:47.921464996Z" level=info msg="CreateContainer within sandbox \"d3b84f4f13cb6039ebe1876b8be083a010745e7ea0d5928b8e01a9eda5e353d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261\"" Sep 16 05:03:47.922259 containerd[1596]: time="2025-09-16T05:03:47.922158351Z" level=info msg="StartContainer for \"f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261\"" Sep 16 05:03:47.923023 containerd[1596]: time="2025-09-16T05:03:47.922993784Z" level=info msg="connecting to shim f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261" address="unix:///run/containerd/s/f5ab4c9799e3100732d844d57a15ad10f129dbf3ee2f0a93a52997b952eecb47" protocol=ttrpc version=3 Sep 16 05:03:47.948643 systemd[1]: Started cri-containerd-f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261.scope - libcontainer container f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261. 
Sep 16 05:03:47.951634 containerd[1596]: time="2025-09-16T05:03:47.951594072Z" level=info msg="Container 5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:47.968808 containerd[1596]: time="2025-09-16T05:03:47.968774578Z" level=info msg="CreateContainer within sandbox \"a0d32c23130233357a82661366609410b8904b2f08c1a3496b4025b83c54cd18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c\"" Sep 16 05:03:47.969482 containerd[1596]: time="2025-09-16T05:03:47.969447836Z" level=info msg="StartContainer for \"5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c\"" Sep 16 05:03:47.970663 containerd[1596]: time="2025-09-16T05:03:47.970634200Z" level=info msg="connecting to shim 5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c" address="unix:///run/containerd/s/eb33a1510fe2345755778817ac401b8c3eba273dea7bfa0f7826930f64fa30bf" protocol=ttrpc version=3 Sep 16 05:03:47.987247 containerd[1596]: time="2025-09-16T05:03:47.987211101Z" level=info msg="StartContainer for \"f02f134a1eaae065f837d1c999c08d00f91751ea39633a19566104001ed55261\" returns successfully" Sep 16 05:03:47.995727 systemd[1]: Started cri-containerd-5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c.scope - libcontainer container 5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c. Sep 16 05:03:48.033205 containerd[1596]: time="2025-09-16T05:03:48.033108020Z" level=info msg="StartContainer for \"5c7d9aef6fb82dd1822db36c149921c17203b03d87fed0e39ac4c6a82f80c26c\" returns successfully" Sep 16 05:03:48.063878 kubelet[2763]: I0916 05:03:48.063813 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5q96z" podStartSLOduration=26.063799641 podStartE2EDuration="26.063799641s" podCreationTimestamp="2025-09-16 05:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:48.06193986 +0000 UTC m=+34.180013306" watchObservedRunningTime="2025-09-16 05:03:48.063799641 +0000 UTC m=+34.181873077" Sep 16 05:03:48.076263 kubelet[2763]: I0916 05:03:48.076198 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9qmdj" podStartSLOduration=26.076180046 podStartE2EDuration="26.076180046s" podCreationTimestamp="2025-09-16 05:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:48.074167938 +0000 UTC m=+34.192241374" watchObservedRunningTime="2025-09-16 05:03:48.076180046 +0000 UTC m=+34.194253482" Sep 16 05:03:48.788152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902758102.mount: Deactivated successfully. Sep 16 05:03:52.881161 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:54608.service - OpenSSH per-connection server daemon (10.0.0.1:54608). Sep 16 05:03:52.937147 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 54608 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:52.938416 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:52.942279 systemd-logind[1574]: New session 10 of user core. Sep 16 05:03:52.948648 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 05:03:53.054719 sshd[4154]: Connection closed by 10.0.0.1 port 54608 Sep 16 05:03:53.055079 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:53.059765 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:54608.service: Deactivated successfully. Sep 16 05:03:53.061768 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 05:03:53.062708 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Sep 16 05:03:53.063704 systemd-logind[1574]: Removed session 10. Sep 16 05:03:58.066378 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Sep 16 05:03:58.119836 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:58.121377 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:58.125431 systemd-logind[1574]: New session 11 of user core. Sep 16 05:03:58.134621 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 05:03:58.245601 sshd[4175]: Connection closed by 10.0.0.1 port 54614 Sep 16 05:03:58.246032 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:58.258065 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:54614.service: Deactivated successfully. Sep 16 05:03:58.259758 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 05:03:58.260610 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Sep 16 05:03:58.263054 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:54620.service - OpenSSH per-connection server daemon (10.0.0.1:54620). Sep 16 05:03:58.263744 systemd-logind[1574]: Removed session 11. Sep 16 05:03:58.320472 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 54620 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:58.321620 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:58.325626 systemd-logind[1574]: New session 12 of user core. Sep 16 05:03:58.333612 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 05:03:58.474557 sshd[4193]: Connection closed by 10.0.0.1 port 54620 Sep 16 05:03:58.475215 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:58.482339 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:54620.service: Deactivated successfully. Sep 16 05:03:58.484290 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 05:03:58.485496 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Sep 16 05:03:58.489404 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:54628.service - OpenSSH per-connection server daemon (10.0.0.1:54628). Sep 16 05:03:58.490799 systemd-logind[1574]: Removed session 12. Sep 16 05:03:58.540201 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 54628 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:03:58.541357 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:58.545711 systemd-logind[1574]: New session 13 of user core. Sep 16 05:03:58.552645 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 05:03:58.657763 sshd[4207]: Connection closed by 10.0.0.1 port 54628 Sep 16 05:03:58.658035 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:58.662463 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:54628.service: Deactivated successfully. Sep 16 05:03:58.664461 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 05:03:58.665266 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Sep 16 05:03:58.666356 systemd-logind[1574]: Removed session 13. Sep 16 05:04:03.674192 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:43780.service - OpenSSH per-connection server daemon (10.0.0.1:43780). Sep 16 05:04:03.736087 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:03.737616 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:03.741632 systemd-logind[1574]: New session 14 of user core. Sep 16 05:04:03.750629 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 05:04:03.863750 sshd[4224]: Connection closed by 10.0.0.1 port 43780 Sep 16 05:04:03.864169 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:03.868659 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:43780.service: Deactivated successfully. Sep 16 05:04:03.870669 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 05:04:03.871556 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Sep 16 05:04:03.872655 systemd-logind[1574]: Removed session 14. Sep 16 05:04:08.883108 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:43790.service - OpenSSH per-connection server daemon (10.0.0.1:43790). Sep 16 05:04:08.937211 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 43790 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:08.938382 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:08.942393 systemd-logind[1574]: New session 15 of user core. Sep 16 05:04:08.947624 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 05:04:09.054009 sshd[4241]: Connection closed by 10.0.0.1 port 43790 Sep 16 05:04:09.054423 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:09.063163 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:43790.service: Deactivated successfully. Sep 16 05:04:09.065112 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 05:04:09.065844 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Sep 16 05:04:09.069015 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:43794.service - OpenSSH per-connection server daemon (10.0.0.1:43794). Sep 16 05:04:09.069685 systemd-logind[1574]: Removed session 15. Sep 16 05:04:09.127419 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:09.129238 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:09.133394 systemd-logind[1574]: New session 16 of user core. Sep 16 05:04:09.148634 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 16 05:04:09.320568 sshd[4258]: Connection closed by 10.0.0.1 port 43794 Sep 16 05:04:09.321085 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:09.331155 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:43794.service: Deactivated successfully. Sep 16 05:04:09.333125 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 05:04:09.333876 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Sep 16 05:04:09.337043 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:43810.service - OpenSSH per-connection server daemon (10.0.0.1:43810). Sep 16 05:04:09.337704 systemd-logind[1574]: Removed session 16. Sep 16 05:04:09.391561 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 43810 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:09.393017 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:09.397401 systemd-logind[1574]: New session 17 of user core. Sep 16 05:04:09.403645 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 05:04:09.874605 sshd[4273]: Connection closed by 10.0.0.1 port 43810 Sep 16 05:04:09.874922 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:09.888424 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:43810.service: Deactivated successfully. Sep 16 05:04:09.891028 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 05:04:09.891887 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Sep 16 05:04:09.895078 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:43818.service - OpenSSH per-connection server daemon (10.0.0.1:43818). Sep 16 05:04:09.896148 systemd-logind[1574]: Removed session 17. Sep 16 05:04:09.946719 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 43818 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:09.947831 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:09.951876 systemd-logind[1574]: New session 18 of user core. Sep 16 05:04:09.957628 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 05:04:10.179547 sshd[4295]: Connection closed by 10.0.0.1 port 43818 Sep 16 05:04:10.179843 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:10.189493 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:43818.service: Deactivated successfully. Sep 16 05:04:10.191380 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 05:04:10.192382 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Sep 16 05:04:10.195465 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054). Sep 16 05:04:10.196195 systemd-logind[1574]: Removed session 18. Sep 16 05:04:10.249121 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:10.251346 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:10.255742 systemd-logind[1574]: New session 19 of user core. Sep 16 05:04:10.262628 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 16 05:04:10.367931 sshd[4309]: Connection closed by 10.0.0.1 port 40054 Sep 16 05:04:10.369866 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:10.374383 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:40054.service: Deactivated successfully. Sep 16 05:04:10.376407 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 05:04:10.377156 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. Sep 16 05:04:10.378460 systemd-logind[1574]: Removed session 19. Sep 16 05:04:15.384144 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:40058.service - OpenSSH per-connection server daemon (10.0.0.1:40058). Sep 16 05:04:15.434172 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 40058 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:15.435466 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:15.439493 systemd-logind[1574]: New session 20 of user core. Sep 16 05:04:15.450645 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 05:04:15.559364 sshd[4329]: Connection closed by 10.0.0.1 port 40058 Sep 16 05:04:15.559701 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:15.564055 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:40058.service: Deactivated successfully. Sep 16 05:04:15.566148 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 05:04:15.566901 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Sep 16 05:04:15.567997 systemd-logind[1574]: Removed session 20. Sep 16 05:04:20.580141 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:33234.service - OpenSSH per-connection server daemon (10.0.0.1:33234). Sep 16 05:04:20.631712 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 33234 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:20.633237 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:20.637036 systemd-logind[1574]: New session 21 of user core. Sep 16 05:04:20.650655 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 05:04:20.757562 sshd[4346]: Connection closed by 10.0.0.1 port 33234 Sep 16 05:04:20.757890 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:20.762131 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:33234.service: Deactivated successfully. Sep 16 05:04:20.764111 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 05:04:20.764817 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. Sep 16 05:04:20.765918 systemd-logind[1574]: Removed session 21. Sep 16 05:04:25.771055 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:33236.service - OpenSSH per-connection server daemon (10.0.0.1:33236). Sep 16 05:04:25.827240 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 33236 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:25.828376 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:25.832337 systemd-logind[1574]: New session 22 of user core. Sep 16 05:04:25.841630 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 16 05:04:25.946297 sshd[4365]: Connection closed by 10.0.0.1 port 33236 Sep 16 05:04:25.946667 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:25.955178 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:33236.service: Deactivated successfully. Sep 16 05:04:25.957112 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 05:04:25.957908 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit. Sep 16 05:04:25.961103 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:33248.service - OpenSSH per-connection server daemon (10.0.0.1:33248). Sep 16 05:04:25.961780 systemd-logind[1574]: Removed session 22. Sep 16 05:04:26.019827 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 33248 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:26.021393 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:26.025701 systemd-logind[1574]: New session 23 of user core. Sep 16 05:04:26.036620 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 05:04:27.362588 containerd[1596]: time="2025-09-16T05:04:27.362542567Z" level=info msg="StopContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" with timeout 30 (s)" Sep 16 05:04:27.363839 containerd[1596]: time="2025-09-16T05:04:27.363738928Z" level=info msg="Stop container \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" with signal terminated" Sep 16 05:04:27.377434 systemd[1]: cri-containerd-4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9.scope: Deactivated successfully. Sep 16 05:04:27.377839 systemd[1]: cri-containerd-4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9.scope: Consumed 296ms CPU time, 29.6M memory peak, 4.3M read from disk, 4K written to disk. 
Sep 16 05:04:27.380237 containerd[1596]: time="2025-09-16T05:04:27.380187711Z" level=info msg="received exit event container_id:\"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" id:\"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" pid:3334 exited_at:{seconds:1757999067 nanos:379922909}" Sep 16 05:04:27.380401 containerd[1596]: time="2025-09-16T05:04:27.380380934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" id:\"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" pid:3334 exited_at:{seconds:1757999067 nanos:379922909}" Sep 16 05:04:27.397116 containerd[1596]: time="2025-09-16T05:04:27.397069509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" id:\"1a62be2296adc67d47ed0a5af79de647b9669dafd5a79200b6e119bf4b55a2dd\" pid:4408 exited_at:{seconds:1757999067 nanos:396801752}" Sep 16 05:04:27.397941 containerd[1596]: time="2025-09-16T05:04:27.397908730Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:04:27.398826 containerd[1596]: time="2025-09-16T05:04:27.398758722Z" level=info msg="StopContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" with timeout 2 (s)" Sep 16 05:04:27.399060 containerd[1596]: time="2025-09-16T05:04:27.399000259Z" level=info msg="Stop container \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" with signal terminated" Sep 16 05:04:27.401979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9-rootfs.mount: Deactivated successfully. Sep 16 05:04:27.405614 systemd-networkd[1493]: lxc_health: Link DOWN Sep 16 05:04:27.405620 systemd-networkd[1493]: lxc_health: Lost carrier Sep 16 05:04:27.422990 containerd[1596]: time="2025-09-16T05:04:27.422946673Z" level=info msg="StopContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" returns successfully" Sep 16 05:04:27.425310 containerd[1596]: time="2025-09-16T05:04:27.425279029Z" level=info msg="StopPodSandbox for \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\"" Sep 16 05:04:27.425364 containerd[1596]: time="2025-09-16T05:04:27.425338744Z" level=info msg="Container to stop \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.426178 systemd[1]: cri-containerd-e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11.scope: Deactivated successfully. Sep 16 05:04:27.426595 systemd[1]: cri-containerd-e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11.scope: Consumed 6.389s CPU time, 124.3M memory peak, 204K read from disk, 13.3M written to disk. 
Sep 16 05:04:27.429472 containerd[1596]: time="2025-09-16T05:04:27.429296339Z" level=info msg="received exit event container_id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" pid:3424 exited_at:{seconds:1757999067 nanos:429094329}" Sep 16 05:04:27.429752 containerd[1596]: time="2025-09-16T05:04:27.429712312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" id:\"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" pid:3424 exited_at:{seconds:1757999067 nanos:429094329}" Sep 16 05:04:27.434468 systemd[1]: cri-containerd-2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e.scope: Deactivated successfully. Sep 16 05:04:27.435769 containerd[1596]: time="2025-09-16T05:04:27.435743201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" id:\"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" pid:2971 exit_status:137 exited_at:{seconds:1757999067 nanos:435462919}" Sep 16 05:04:27.452712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11-rootfs.mount: Deactivated successfully. Sep 16 05:04:27.465179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e-rootfs.mount: Deactivated successfully. Sep 16 05:04:27.472894 containerd[1596]: time="2025-09-16T05:04:27.472822544Z" level=info msg="shim disconnected" id=2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e namespace=k8s.io Sep 16 05:04:27.472894 containerd[1596]: time="2025-09-16T05:04:27.472854986Z" level=warning msg="cleaning up after shim disconnected" id=2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e namespace=k8s.io Sep 16 05:04:27.486782 containerd[1596]: time="2025-09-16T05:04:27.472863142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:04:27.486877 containerd[1596]: time="2025-09-16T05:04:27.477045882Z" level=info msg="StopContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" returns successfully" Sep 16 05:04:27.487358 containerd[1596]: time="2025-09-16T05:04:27.487335156Z" level=info msg="StopPodSandbox for \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\"" Sep 16 05:04:27.487400 containerd[1596]: time="2025-09-16T05:04:27.487390483Z" level=info msg="Container to stop \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.487432 containerd[1596]: time="2025-09-16T05:04:27.487401745Z" level=info msg="Container to stop \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.487432 containerd[1596]: time="2025-09-16T05:04:27.487410312Z" level=info msg="Container to stop \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.487432 containerd[1596]: time="2025-09-16T05:04:27.487418177Z" level=info msg="Container to stop \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.487432 
containerd[1596]: time="2025-09-16T05:04:27.487425891Z" level=info msg="Container to stop \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:04:27.493329 systemd[1]: cri-containerd-01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510.scope: Deactivated successfully. Sep 16 05:04:27.507892 containerd[1596]: time="2025-09-16T05:04:27.507815746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" pid:2912 exit_status:137 exited_at:{seconds:1757999067 nanos:499328082}" Sep 16 05:04:27.510134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e-shm.mount: Deactivated successfully. Sep 16 05:04:27.518292 containerd[1596]: time="2025-09-16T05:04:27.518255943Z" level=info msg="received exit event sandbox_id:\"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" exit_status:137 exited_at:{seconds:1757999067 nanos:435462919}" Sep 16 05:04:27.519402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510-rootfs.mount: Deactivated successfully. Sep 16 05:04:27.523323 containerd[1596]: time="2025-09-16T05:04:27.523285317Z" level=info msg="TearDown network for sandbox \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" successfully" Sep 16 05:04:27.523323 containerd[1596]: time="2025-09-16T05:04:27.523307240Z" level=info msg="StopPodSandbox for \"2aeab82ca0153668b7e6513b58d70c62b74f648d372dc9c8c3086ed35b57499e\" returns successfully" Sep 16 05:04:27.532667 containerd[1596]: time="2025-09-16T05:04:27.532629206Z" level=info msg="shim disconnected" id=01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510 namespace=k8s.io Sep 16 05:04:27.532667 containerd[1596]: time="2025-09-16T05:04:27.532659565Z" level=warning msg="cleaning up after shim disconnected" id=01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510 namespace=k8s.io Sep 16 05:04:27.532866 containerd[1596]: time="2025-09-16T05:04:27.532668172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:04:27.533077 containerd[1596]: time="2025-09-16T05:04:27.533044028Z" level=error msg="Failed to handle event container_id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" pid:2912 exit_status:137 exited_at:{seconds:1757999067 nanos:499328082} for 01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" Sep 16 05:04:27.544944 containerd[1596]: time="2025-09-16T05:04:27.544906972Z" level=info msg="received exit event sandbox_id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" exit_status:137 exited_at:{seconds:1757999067 nanos:499328082}" Sep 16 05:04:27.545238 containerd[1596]: time="2025-09-16T05:04:27.545190429Z" level=info msg="TearDown network for sandbox \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" successfully" Sep 16 05:04:27.545238 containerd[1596]: time="2025-09-16T05:04:27.545218914Z" level=info msg="StopPodSandbox for \"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" returns successfully" Sep 16 05:04:27.590434 
kubelet[2763]: I0916 05:04:27.590391 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8bqd\" (UniqueName: \"kubernetes.io/projected/3f744d51-fa72-40e6-b1ca-2dca418b9000-kube-api-access-j8bqd\") pod \"3f744d51-fa72-40e6-b1ca-2dca418b9000\" (UID: \"3f744d51-fa72-40e6-b1ca-2dca418b9000\") " Sep 16 05:04:27.590789 kubelet[2763]: I0916 05:04:27.590450 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f744d51-fa72-40e6-b1ca-2dca418b9000-cilium-config-path\") pod \"3f744d51-fa72-40e6-b1ca-2dca418b9000\" (UID: \"3f744d51-fa72-40e6-b1ca-2dca418b9000\") " Sep 16 05:04:27.593456 kubelet[2763]: I0916 05:04:27.593405 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f744d51-fa72-40e6-b1ca-2dca418b9000-kube-api-access-j8bqd" (OuterVolumeSpecName: "kube-api-access-j8bqd") pod "3f744d51-fa72-40e6-b1ca-2dca418b9000" (UID: "3f744d51-fa72-40e6-b1ca-2dca418b9000"). InnerVolumeSpecName "kube-api-access-j8bqd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:04:27.593456 kubelet[2763]: I0916 05:04:27.593414 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f744d51-fa72-40e6-b1ca-2dca418b9000-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f744d51-fa72-40e6-b1ca-2dca418b9000" (UID: "3f744d51-fa72-40e6-b1ca-2dca418b9000"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 05:04:27.691109 kubelet[2763]: I0916 05:04:27.691013 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-xtables-lock\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691109 kubelet[2763]: I0916 05:04:27.691054 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cni-path\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691109 kubelet[2763]: I0916 05:04:27.691073 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-hostproc\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691109 kubelet[2763]: I0916 05:04:27.691088 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-net\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691109 kubelet[2763]: I0916 05:04:27.691109 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae2867b-194c-47af-8188-2901d549d330-clustermesh-secrets\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691126 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-hubble-tls\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691144 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-bpf-maps\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691160 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-cgroup\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691201 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-lib-modules\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691015 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.691295 kubelet[2763]: I0916 05:04:27.691196 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-hostproc" (OuterVolumeSpecName: "hostproc") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691273 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cni-path" (OuterVolumeSpecName: "cni-path") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691218 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-kube-api-access-mwxzr\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691322 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-kernel\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691344 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae2867b-194c-47af-8188-2901d549d330-cilium-config-path\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691358 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-run\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691449 kubelet[2763]: I0916 05:04:27.691374 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-etc-cni-netd\") pod \"aae2867b-194c-47af-8188-2901d549d330\" (UID: \"aae2867b-194c-47af-8188-2901d549d330\") " Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691422 2763 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691433 2763 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691442 2763 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f744d51-fa72-40e6-b1ca-2dca418b9000-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691451 2763 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691459 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j8bqd\" (UniqueName: \"kubernetes.io/projected/3f744d51-fa72-40e6-b1ca-2dca418b9000-kube-api-access-j8bqd\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691480 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.691624 kubelet[2763]: I0916 05:04:27.691494 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.692588 kubelet[2763]: I0916 05:04:27.692552 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.692637 kubelet[2763]: I0916 05:04:27.692600 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.693246 kubelet[2763]: I0916 05:04:27.693166 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.693246 kubelet[2763]: I0916 05:04:27.693204 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.693316 kubelet[2763]: I0916 05:04:27.693298 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:04:27.694412 kubelet[2763]: I0916 05:04:27.694356 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-kube-api-access-mwxzr" (OuterVolumeSpecName: "kube-api-access-mwxzr") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "kube-api-access-mwxzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:04:27.695394 kubelet[2763]: I0916 05:04:27.695353 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae2867b-194c-47af-8188-2901d549d330-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 05:04:27.695992 kubelet[2763]: I0916 05:04:27.695970 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae2867b-194c-47af-8188-2901d549d330-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 05:04:27.696104 kubelet[2763]: I0916 05:04:27.696085 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aae2867b-194c-47af-8188-2901d549d330" (UID: "aae2867b-194c-47af-8188-2901d549d330"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792378 2763 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792416 2763 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792426 2763 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792435 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-kube-api-access-mwxzr\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792444 2763 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792454 2763 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae2867b-194c-47af-8188-2901d549d330-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792434 kubelet[2763]: I0916 05:04:27.792462 2763 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792768 kubelet[2763]: I0916 05:04:27.792471 2763 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792768 kubelet[2763]: I0916 05:04:27.792478 2763 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae2867b-194c-47af-8188-2901d549d330-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792768 kubelet[2763]: I0916 05:04:27.792485 2763 reconciler_common.go:299] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae2867b-194c-47af-8188-2901d549d330-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.792768 kubelet[2763]: I0916 05:04:27.792492 2763 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aae2867b-194c-47af-8188-2901d549d330-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 16 05:04:27.972259 systemd[1]: Removed slice kubepods-besteffort-pod3f744d51_fa72_40e6_b1ca_2dca418b9000.slice - libcontainer container kubepods-besteffort-pod3f744d51_fa72_40e6_b1ca_2dca418b9000.slice. Sep 16 05:04:27.972351 systemd[1]: kubepods-besteffort-pod3f744d51_fa72_40e6_b1ca_2dca418b9000.slice: Consumed 321ms CPU time, 29.9M memory peak, 4.3M read from disk, 4K written to disk. Sep 16 05:04:27.974115 systemd[1]: Removed slice kubepods-burstable-podaae2867b_194c_47af_8188_2901d549d330.slice - libcontainer container kubepods-burstable-podaae2867b_194c_47af_8188_2901d549d330.slice. Sep 16 05:04:27.974208 systemd[1]: kubepods-burstable-podaae2867b_194c_47af_8188_2901d549d330.slice: Consumed 6.491s CPU time, 124.6M memory peak, 212K read from disk, 13.3M written to disk. Sep 16 05:04:28.134822 kubelet[2763]: I0916 05:04:28.134782 2763 scope.go:117] "RemoveContainer" containerID="4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9" Sep 16 05:04:28.137146 containerd[1596]: time="2025-09-16T05:04:28.137104653Z" level=info msg="RemoveContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\"" Sep 16 05:04:28.156273 containerd[1596]: time="2025-09-16T05:04:28.156108222Z" level=info msg="RemoveContainer for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" returns successfully" Sep 16 05:04:28.156618 kubelet[2763]: I0916 05:04:28.156592 2763 scope.go:117] "RemoveContainer" containerID="4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9" Sep 16 05:04:28.163165 containerd[1596]: time="2025-09-16T05:04:28.157788805Z" level=error msg="ContainerStatus for \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\": not found" Sep 16 05:04:28.164435 kubelet[2763]: E0916 05:04:28.164399 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\": not found" containerID="4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9" Sep 16 05:04:28.164486 kubelet[2763]: I0916 05:04:28.164441 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9"} err="failed to get container status \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ac3f6f231737407914afbafe3aa2e1714d011d1ecc44d4571d808f61d4e59d9\": not found" Sep 16 05:04:28.164486 kubelet[2763]: I0916 05:04:28.164479 2763 scope.go:117] "RemoveContainer" containerID="e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11" Sep 16 05:04:28.166499 containerd[1596]: time="2025-09-16T05:04:28.166045161Z" level=info msg="RemoveContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\"" Sep 16 
05:04:28.170556 containerd[1596]: time="2025-09-16T05:04:28.170494809Z" level=info msg="RemoveContainer for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" returns successfully" Sep 16 05:04:28.170709 kubelet[2763]: I0916 05:04:28.170686 2763 scope.go:117] "RemoveContainer" containerID="177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559" Sep 16 05:04:28.171784 containerd[1596]: time="2025-09-16T05:04:28.171751996Z" level=info msg="RemoveContainer for \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\"" Sep 16 05:04:28.175951 containerd[1596]: time="2025-09-16T05:04:28.175912025Z" level=info msg="RemoveContainer for \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" returns successfully" Sep 16 05:04:28.176080 kubelet[2763]: I0916 05:04:28.176048 2763 scope.go:117] "RemoveContainer" containerID="1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3" Sep 16 05:04:28.178053 containerd[1596]: time="2025-09-16T05:04:28.178027046Z" level=info msg="RemoveContainer for \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\"" Sep 16 05:04:28.182141 containerd[1596]: time="2025-09-16T05:04:28.182084869Z" level=info msg="RemoveContainer for \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" returns successfully" Sep 16 05:04:28.182284 kubelet[2763]: I0916 05:04:28.182243 2763 scope.go:117] "RemoveContainer" containerID="aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316" Sep 16 05:04:28.183538 containerd[1596]: time="2025-09-16T05:04:28.183394527Z" level=info msg="RemoveContainer for \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\"" Sep 16 05:04:28.186940 containerd[1596]: time="2025-09-16T05:04:28.186912847Z" level=info msg="RemoveContainer for \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" returns successfully" Sep 16 05:04:28.187122 kubelet[2763]: I0916 05:04:28.187052 2763 scope.go:117] "RemoveContainer" containerID="fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8" Sep 16 05:04:28.188204 containerd[1596]: time="2025-09-16T05:04:28.188175374Z" level=info msg="RemoveContainer for \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\"" Sep 16 05:04:28.204231 containerd[1596]: time="2025-09-16T05:04:28.204201084Z" level=info msg="RemoveContainer for \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" returns successfully" Sep 16 05:04:28.204418 kubelet[2763]: I0916 05:04:28.204393 2763 scope.go:117] "RemoveContainer" containerID="e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11" Sep 16 05:04:28.204622 containerd[1596]: time="2025-09-16T05:04:28.204580266Z" level=error msg="ContainerStatus for \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\": not found" Sep 16 05:04:28.204758 kubelet[2763]: E0916 05:04:28.204734 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\": not found" containerID="e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11" Sep 16 05:04:28.204758 kubelet[2763]: I0916 05:04:28.204757 2763 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11"} err="failed to get container status \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9b07b4976ee8673f9448b2147ae3dac4d5ed8930dbaadd75a3e3ebd9970eb11\": not found" Sep 16 05:04:28.204758 kubelet[2763]: I0916 05:04:28.204770 2763 scope.go:117] "RemoveContainer" containerID="177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559" Sep 16 05:04:28.205008 containerd[1596]: time="2025-09-16T05:04:28.204973686Z" level=error msg="ContainerStatus for \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\": not found" Sep 16 05:04:28.205138 kubelet[2763]: E0916 05:04:28.205112 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\": not found" containerID="177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559" Sep 16 05:04:28.205173 kubelet[2763]: I0916 05:04:28.205142 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559"} err="failed to get container status \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\": rpc error: code = NotFound desc = an error occurred when try to find container \"177d576ca8e7e1da637587a39f8343f0b3a1982e2274160fcf75534f4943f559\": not found" Sep 16 05:04:28.205173 kubelet[2763]: I0916 05:04:28.205165 2763 scope.go:117] "RemoveContainer" containerID="1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3" Sep 16 05:04:28.205341 containerd[1596]: time="2025-09-16T05:04:28.205300928Z" level=error msg="ContainerStatus for \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\": not found" Sep 16 05:04:28.205434 kubelet[2763]: E0916 05:04:28.205411 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\": not found" containerID="1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3" Sep 16 05:04:28.205470 kubelet[2763]: I0916 05:04:28.205434 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3"} err="failed to get container status \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f2f9ef2990b103259287c560558a1346b4f4c6dd7eb6e65da5267958c7e97f3\": not found" Sep 16 05:04:28.205470 kubelet[2763]: I0916 05:04:28.205448 2763 scope.go:117] "RemoveContainer" containerID="aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316" Sep 16 05:04:28.205602 containerd[1596]: time="2025-09-16T05:04:28.205578183Z" level=error msg="ContainerStatus for \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\": not found" Sep 16 05:04:28.205700 kubelet[2763]: E0916 05:04:28.205680 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\": not found" containerID="aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316" Sep 16 05:04:28.205751 kubelet[2763]: I0916 05:04:28.205700 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316"} err="failed to get container status \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\": rpc error: code = NotFound desc = an error occurred when try to find container \"aea50c7446b76e7d21ae299066ed7b5ac294a124b533dc3abe6697573918f316\": not found" Sep 16 05:04:28.205751 kubelet[2763]: I0916 05:04:28.205713 2763 scope.go:117] "RemoveContainer" containerID="fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8" Sep 16 05:04:28.205884 containerd[1596]: time="2025-09-16T05:04:28.205823656Z" level=error msg="ContainerStatus for \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\": not found" Sep 16 05:04:28.205983 kubelet[2763]: E0916 05:04:28.205955 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\": not found" containerID="fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8" Sep 16 05:04:28.206019 kubelet[2763]: I0916 05:04:28.205986 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8"} err="failed to get container status \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fcfd2cdfaba064a501df21c5b5dc1a131e85db5378efcd05a9ee96cb94ae1cd8\": not found" Sep 16 05:04:28.401677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510-shm.mount: Deactivated successfully. Sep 16 05:04:28.401800 systemd[1]: var-lib-kubelet-pods-3f744d51\x2dfa72\x2d40e6\x2db1ca\x2d2dca418b9000-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8bqd.mount: Deactivated successfully. Sep 16 05:04:28.401881 systemd[1]: var-lib-kubelet-pods-aae2867b\x2d194c\x2d47af\x2d8188\x2d2901d549d330-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwxzr.mount: Deactivated successfully. Sep 16 05:04:28.402676 systemd[1]: var-lib-kubelet-pods-aae2867b\x2d194c\x2d47af\x2d8188\x2d2901d549d330-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 05:04:28.402804 systemd[1]: var-lib-kubelet-pods-aae2867b\x2d194c\x2d47af\x2d8188\x2d2901d549d330-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 16 05:04:28.722962 containerd[1596]: time="2025-09-16T05:04:28.722635213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" id:\"01e820b6fe19314887ae88d2aa592b96cf9fc9485f6a8f158ebe1a7ca5649510\" pid:2912 exit_status:137 exited_at:{seconds:1757999067 nanos:499328082}" Sep 16 05:04:29.010341 kubelet[2763]: E0916 05:04:29.010240 2763 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:04:29.328606 sshd[4381]: Connection closed by 10.0.0.1 port 33248 Sep 16 05:04:29.329135 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:29.341232 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:33248.service: Deactivated successfully. Sep 16 05:04:29.343205 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 05:04:29.343964 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. Sep 16 05:04:29.346673 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:33254.service - OpenSSH per-connection server daemon (10.0.0.1:33254). Sep 16 05:04:29.347280 systemd-logind[1574]: Removed session 23. Sep 16 05:04:29.404903 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 33254 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:29.406087 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:29.410026 systemd-logind[1574]: New session 24 of user core. Sep 16 05:04:29.420632 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 05:04:29.964139 kubelet[2763]: I0916 05:04:29.964089 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f744d51-fa72-40e6-b1ca-2dca418b9000" path="/var/lib/kubelet/pods/3f744d51-fa72-40e6-b1ca-2dca418b9000/volumes" Sep 16 05:04:29.964641 kubelet[2763]: I0916 05:04:29.964615 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae2867b-194c-47af-8188-2901d549d330" path="/var/lib/kubelet/pods/aae2867b-194c-47af-8188-2901d549d330/volumes" Sep 16 05:04:30.144625 sshd[4535]: Connection closed by 10.0.0.1 port 33254 Sep 16 05:04:30.145886 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:30.156088 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:33254.service: Deactivated successfully. Sep 16 05:04:30.158138 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 05:04:30.160163 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit. Sep 16 05:04:30.166315 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:43414.service - OpenSSH per-connection server daemon (10.0.0.1:43414). Sep 16 05:04:30.168905 systemd-logind[1574]: Removed session 24. Sep 16 05:04:30.186324 systemd[1]: Created slice kubepods-burstable-pod5df1a4e4_4c6c_478e_9e21_cb844c317017.slice - libcontainer container kubepods-burstable-pod5df1a4e4_4c6c_478e_9e21_cb844c317017.slice. Sep 16 05:04:30.222197 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 43414 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:30.223347 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:30.227494 systemd-logind[1574]: New session 25 of user core. Sep 16 05:04:30.246628 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 16 05:04:30.297179 sshd[4551]: Connection closed by 10.0.0.1 port 43414 Sep 16 05:04:30.298466 sshd-session[4548]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:30.307067 kubelet[2763]: I0916 05:04:30.307005 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-cilium-run\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307067 kubelet[2763]: I0916 05:04:30.307037 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-bpf-maps\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307370 kubelet[2763]: I0916 05:04:30.307228 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-lib-modules\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307370 kubelet[2763]: I0916 05:04:30.307268 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5df1a4e4-4c6c-478e-9e21-cb844c317017-hubble-tls\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307370 kubelet[2763]: I0916 05:04:30.307289 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-cni-path\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307244 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:43414.service: Deactivated successfully. 
Sep 16 05:04:30.307651 kubelet[2763]: I0916 05:04:30.307302 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-etc-cni-netd\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307651 kubelet[2763]: I0916 05:04:30.307407 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5df1a4e4-4c6c-478e-9e21-cb844c317017-cilium-config-path\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307651 kubelet[2763]: I0916 05:04:30.307423 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-hostproc\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307651 kubelet[2763]: I0916 05:04:30.307542 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-host-proc-sys-kernel\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307651 kubelet[2763]: I0916 05:04:30.307560 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5df1a4e4-4c6c-478e-9e21-cb844c317017-clustermesh-secrets\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307772 kubelet[2763]: I0916 05:04:30.307576 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5df1a4e4-4c6c-478e-9e21-cb844c317017-cilium-ipsec-secrets\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307772 kubelet[2763]: I0916 05:04:30.307596 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-xtables-lock\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307772 kubelet[2763]: I0916 05:04:30.307615 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-host-proc-sys-net\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307772 kubelet[2763]: I0916 05:04:30.307629 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5njfj\" (UniqueName: \"kubernetes.io/projected/5df1a4e4-4c6c-478e-9e21-cb844c317017-kube-api-access-5njfj\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.307772 kubelet[2763]: I0916 05:04:30.307644 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5df1a4e4-4c6c-478e-9e21-cb844c317017-cilium-cgroup\") pod \"cilium-2srf9\" (UID: \"5df1a4e4-4c6c-478e-9e21-cb844c317017\") " pod="kube-system/cilium-2srf9" Sep 16 05:04:30.309135 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 05:04:30.309885 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit. Sep 16 05:04:30.312874 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:43426.service - OpenSSH per-connection server daemon (10.0.0.1:43426). Sep 16 05:04:30.313439 systemd-logind[1574]: Removed session 25. Sep 16 05:04:30.370869 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 43426 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 05:04:30.372041 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:30.376266 systemd-logind[1574]: New session 26 of user core. Sep 16 05:04:30.383624 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 16 05:04:30.493050 containerd[1596]: time="2025-09-16T05:04:30.492942797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2srf9,Uid:5df1a4e4-4c6c-478e-9e21-cb844c317017,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:30.508193 containerd[1596]: time="2025-09-16T05:04:30.508164743Z" level=info msg="connecting to shim fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:30.538631 systemd[1]: Started cri-containerd-fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012.scope - libcontainer container fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012. 
Sep 16 05:04:30.561397 containerd[1596]: time="2025-09-16T05:04:30.561346384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2srf9,Uid:5df1a4e4-4c6c-478e-9e21-cb844c317017,Namespace:kube-system,Attempt:0,} returns sandbox id \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\"" Sep 16 05:04:30.568543 containerd[1596]: time="2025-09-16T05:04:30.568486925Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:04:30.575524 containerd[1596]: time="2025-09-16T05:04:30.575194842Z" level=info msg="Container 64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:30.586771 containerd[1596]: time="2025-09-16T05:04:30.586731255Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\"" Sep 16 05:04:30.587184 containerd[1596]: time="2025-09-16T05:04:30.587157596Z" level=info msg="StartContainer for \"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\"" Sep 16 05:04:30.588091 containerd[1596]: time="2025-09-16T05:04:30.588066377Z" level=info msg="connecting to shim 64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" protocol=ttrpc version=3 Sep 16 05:04:30.613620 systemd[1]: Started cri-containerd-64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075.scope - libcontainer container 64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075. Sep 16 05:04:30.640343 containerd[1596]: time="2025-09-16T05:04:30.640312195Z" level=info msg="StartContainer for \"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\" returns successfully" Sep 16 05:04:30.648886 systemd[1]: cri-containerd-64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075.scope: Deactivated successfully. 
Sep 16 05:04:30.650026 containerd[1596]: time="2025-09-16T05:04:30.649907206Z" level=info msg="received exit event container_id:\"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\" id:\"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\" pid:4632 exited_at:{seconds:1757999070 nanos:649643187}"
Sep 16 05:04:30.650169 containerd[1596]: time="2025-09-16T05:04:30.650101430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\" id:\"64c4ec4fe17ac86e43a619e2d04cd382c79224091734359a753c3e5f5c98b075\" pid:4632 exited_at:{seconds:1757999070 nanos:649643187}"
Sep 16 05:04:31.154059 containerd[1596]: time="2025-09-16T05:04:31.154016333Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 16 05:04:31.179905 containerd[1596]: time="2025-09-16T05:04:31.179864201Z" level=info msg="Container c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:04:31.185881 containerd[1596]: time="2025-09-16T05:04:31.185830300Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\""
Sep 16 05:04:31.186536 containerd[1596]: time="2025-09-16T05:04:31.186344220Z" level=info msg="StartContainer for \"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\""
Sep 16 05:04:31.187109 containerd[1596]: time="2025-09-16T05:04:31.187076140Z" level=info msg="connecting to shim c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" protocol=ttrpc version=3
Sep 16 05:04:31.210649 systemd[1]: Started cri-containerd-c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0.scope - libcontainer container c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0.
Sep 16 05:04:31.236449 containerd[1596]: time="2025-09-16T05:04:31.236413504Z" level=info msg="StartContainer for \"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\" returns successfully"
Sep 16 05:04:31.242659 systemd[1]: cri-containerd-c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0.scope: Deactivated successfully.
Sep 16 05:04:31.243847 containerd[1596]: time="2025-09-16T05:04:31.243793998Z" level=info msg="received exit event container_id:\"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\" id:\"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\" pid:4676 exited_at:{seconds:1757999071 nanos:243581729}"
Sep 16 05:04:31.243955 containerd[1596]: time="2025-09-16T05:04:31.243916804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\" id:\"c218c2140f95bcc6bee893845199baa38f06274944b69be60e292dc91d1f79e0\" pid:4676 exited_at:{seconds:1757999071 nanos:243581729}"
Sep 16 05:04:32.159875 containerd[1596]: time="2025-09-16T05:04:32.159174276Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 16 05:04:32.171240 containerd[1596]: time="2025-09-16T05:04:32.171192806Z" level=info msg="Container 4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:04:32.179286 containerd[1596]: time="2025-09-16T05:04:32.179249350Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\""
Sep 16 05:04:32.179900 containerd[1596]: time="2025-09-16T05:04:32.179779330Z" level=info msg="StartContainer for \"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\""
Sep 16 05:04:32.181021 containerd[1596]: time="2025-09-16T05:04:32.180956886Z" level=info msg="connecting to shim 4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" protocol=ttrpc version=3
Sep 16 05:04:32.204634 systemd[1]: Started cri-containerd-4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5.scope - libcontainer container 4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5.
Sep 16 05:04:32.244676 systemd[1]: cri-containerd-4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5.scope: Deactivated successfully.
Sep 16 05:04:32.246207 containerd[1596]: time="2025-09-16T05:04:32.246157513Z" level=info msg="received exit event container_id:\"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\" id:\"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\" pid:4720 exited_at:{seconds:1757999072 nanos:245426496}"
Sep 16 05:04:32.246207 containerd[1596]: time="2025-09-16T05:04:32.246204142Z" level=info msg="StartContainer for \"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\" returns successfully"
Sep 16 05:04:32.246440 containerd[1596]: time="2025-09-16T05:04:32.246170537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\" id:\"4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5\" pid:4720 exited_at:{seconds:1757999072 nanos:245426496}"
Sep 16 05:04:32.266826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4945aa9d9b35abce3bf0893fb2b29056bdc8ed05f0103bf9dcec94ba6e5ce3c5-rootfs.mount: Deactivated successfully.
Sep 16 05:04:33.165178 containerd[1596]: time="2025-09-16T05:04:33.165124467Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 16 05:04:33.176959 containerd[1596]: time="2025-09-16T05:04:33.176845879Z" level=info msg="Container 149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:04:33.179057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828456394.mount: Deactivated successfully.
Sep 16 05:04:33.183903 containerd[1596]: time="2025-09-16T05:04:33.183870679Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\""
Sep 16 05:04:33.184538 containerd[1596]: time="2025-09-16T05:04:33.184516792Z" level=info msg="StartContainer for \"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\""
Sep 16 05:04:33.185490 containerd[1596]: time="2025-09-16T05:04:33.185287254Z" level=info msg="connecting to shim 149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" protocol=ttrpc version=3
Sep 16 05:04:33.204627 systemd[1]: Started cri-containerd-149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5.scope - libcontainer container 149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5.
Sep 16 05:04:33.230451 systemd[1]: cri-containerd-149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5.scope: Deactivated successfully.
Sep 16 05:04:33.230948 containerd[1596]: time="2025-09-16T05:04:33.230908371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\" id:\"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\" pid:4759 exited_at:{seconds:1757999073 nanos:230688127}"
Sep 16 05:04:33.232317 containerd[1596]: time="2025-09-16T05:04:33.232280369Z" level=info msg="received exit event container_id:\"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\" id:\"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\" pid:4759 exited_at:{seconds:1757999073 nanos:230688127}"
Sep 16 05:04:33.236843 containerd[1596]: time="2025-09-16T05:04:33.236800853Z" level=info msg="StartContainer for \"149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5\" returns successfully"
Sep 16 05:04:33.253744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-149b2c230c6b0d0955292e5831d5a5c299ecd43c35d0260593303889d419b1b5-rootfs.mount: Deactivated successfully.
Sep 16 05:04:34.011128 kubelet[2763]: E0916 05:04:34.011083 2763 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 16 05:04:34.169465 containerd[1596]: time="2025-09-16T05:04:34.169425599Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 16 05:04:34.191076 containerd[1596]: time="2025-09-16T05:04:34.191024690Z" level=info msg="Container e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:04:34.199661 containerd[1596]: time="2025-09-16T05:04:34.199616753Z" level=info msg="CreateContainer within sandbox \"fccd12ef4814d5966c61cea912f60e1a4dd78da4668a2dd46288424cdf9d7012\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\""
Sep 16 05:04:34.200130 containerd[1596]: time="2025-09-16T05:04:34.200092197Z" level=info msg="StartContainer for \"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\""
Sep 16 05:04:34.200991 containerd[1596]: time="2025-09-16T05:04:34.200968441Z" level=info msg="connecting to shim e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8" address="unix:///run/containerd/s/4cd33e602082468d70352f36d418d2d71f5a37e16aea127d67c590666ddea9ec" protocol=ttrpc version=3
Sep 16 05:04:34.231199 systemd[1]: Started cri-containerd-e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8.scope - libcontainer container e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8.
Sep 16 05:04:34.266285 containerd[1596]: time="2025-09-16T05:04:34.266008491Z" level=info msg="StartContainer for \"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" returns successfully"
Sep 16 05:04:34.328183 containerd[1596]: time="2025-09-16T05:04:34.328138870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" id:\"2e5e0d12b024e78fb85d0bc6131a7b48fcd196b5674e36b4bc4b1f1898bd9a69\" pid:4828 exited_at:{seconds:1757999074 nanos:327700728}"
Sep 16 05:04:34.663622 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 16 05:04:35.183387 kubelet[2763]: I0916 05:04:35.183324 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2srf9" podStartSLOduration=5.183307915 podStartE2EDuration="5.183307915s" podCreationTimestamp="2025-09-16 05:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:35.182538116 +0000 UTC m=+81.300611552" watchObservedRunningTime="2025-09-16 05:04:35.183307915 +0000 UTC m=+81.301381341"
Sep 16 05:04:36.048719 kubelet[2763]: I0916 05:04:36.048666 2763 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T05:04:36Z","lastTransitionTime":"2025-09-16T05:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 16 05:04:36.670487 containerd[1596]: time="2025-09-16T05:04:36.670413530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" id:\"429611f3be44b2bbd2e76ceaceeeb09a9e7464ac164e1a52f655161553a944ee\" pid:5042 exit_status:1 exited_at:{seconds:1757999076 nanos:670130758}"
Sep 16 05:04:37.664110 systemd-networkd[1493]: lxc_health: Link UP
Sep 16 05:04:37.664772 systemd-networkd[1493]: lxc_health: Gained carrier
Sep 16 05:04:38.774563 containerd[1596]: time="2025-09-16T05:04:38.774520016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" id:\"971ae53fd30779f0e8bd322133a9acb1038079316eab58ca83a3f9b6153ecc55\" pid:5365 exited_at:{seconds:1757999078 nanos:774182669}"
Sep 16 05:04:39.324667 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Sep 16 05:04:40.880231 containerd[1596]: time="2025-09-16T05:04:40.880186916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" id:\"dcb13b7a07afaf425765f1ce9b69630d71e7749c8d72759b6112e32fa932e74b\" pid:5399 exited_at:{seconds:1757999080 nanos:879766782}"
Sep 16 05:04:42.978653 containerd[1596]: time="2025-09-16T05:04:42.978568454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88c16d467baeb64dcd147b500a3bb344ef46d78ea4fa5e9065992ca86470fb8\" id:\"1b304f738d70af2ad82d6270c706c1340bc717a12ea2048abcda2e84cf36b22b\" pid:5425 exited_at:{seconds:1757999082 nanos:978290773}"
Sep 16 05:04:42.984841 sshd[4561]: Connection closed by 10.0.0.1 port 43426
Sep 16 05:04:42.985252 sshd-session[4558]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:42.989919 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:43426.service: Deactivated successfully.
Sep 16 05:04:42.991915 systemd[1]: session-26.scope: Deactivated successfully.
Sep 16 05:04:42.992657 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit.
Sep 16 05:04:42.994135 systemd-logind[1574]: Removed session 26.